Test Report: Docker_Linux_crio_arm64 22158

84cd1e71ac9e612e02e936645952571e7d114b51:2025-12-16:42799

Test failures: 40 of 316

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.36
44 TestAddons/parallel/Registry 15
45 TestAddons/parallel/RegistryCreds 0.51
46 TestAddons/parallel/Ingress 142.17
47 TestAddons/parallel/InspektorGadget 6.26
48 TestAddons/parallel/MetricsServer 6.38
50 TestAddons/parallel/CSI 30.2
51 TestAddons/parallel/Headlamp 3.92
52 TestAddons/parallel/CloudSpanner 6.37
53 TestAddons/parallel/LocalPath 8.37
54 TestAddons/parallel/NvidiaDevicePlugin 6.34
55 TestAddons/parallel/Yakd 6.26
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 503.29
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.28
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.4
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.43
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.63
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 736.29
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.17
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.08
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.72
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.12
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.42
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.62
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.43
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.08
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 99.12
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.05
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.27
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.25
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.53
293 TestJSONOutput/pause/Command 1.74
299 TestJSONOutput/unpause/Command 1.88
358 TestKubernetesUpgrade 785.81
374 TestPause/serial/Pause 8.62
485 TestNetworkPlugins/group/flannel/DNS 7200.086
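
Several of the TestAddons failures above (at least Volcano, Registry, and RegistryCreds in the blocks below) share one root cause: "addons disable" exits with MK_ADDON_DISABLE_PAUSED because minikube's paused-state check shells out to "sudo runc list -f json", which fails on this crio node with "open /run/runc: no such file or directory". To iterate on a single failure locally, a minimal sketch, assuming minikube's standard integration-test layout under test/integration (extra flags such as -minikube-start-args vary by environment and are omitted here):

    # Re-run one failed test verbosely against a locally built minikube (sketch; flags are assumptions)
    go test ./test/integration -run "TestAddons/serial/Volcano" -v
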
TestAddons/serial/Volcano (0.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable volcano --alsologtostderr -v=1: exit status 11 (354.787335ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:13:21.468944  448535 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:13:21.469768  448535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:21.469813  448535 out.go:374] Setting ErrFile to fd 2...
	I1216 04:13:21.469837  448535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:21.470135  448535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:13:21.470475  448535 mustload.go:66] Loading cluster: addons-266389
	I1216 04:13:21.470913  448535 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:21.470961  448535 addons.go:622] checking whether the cluster is paused
	I1216 04:13:21.471097  448535 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:21.471134  448535 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:13:21.471678  448535 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:13:21.497395  448535 ssh_runner.go:195] Run: systemctl --version
	I1216 04:13:21.497652  448535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:13:21.521628  448535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:13:21.656667  448535 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:13:21.656768  448535 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:13:21.691194  448535 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:13:21.691223  448535 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:13:21.691230  448535 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:13:21.691234  448535 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:13:21.691238  448535 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:13:21.691241  448535 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:13:21.691245  448535 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:13:21.691248  448535 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:13:21.691251  448535 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:13:21.691263  448535 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:13:21.691271  448535 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:13:21.691274  448535 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:13:21.691277  448535 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:13:21.691281  448535 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:13:21.691288  448535 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:13:21.691297  448535 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:13:21.691308  448535 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:13:21.691313  448535 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:13:21.691316  448535 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:13:21.691320  448535 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:13:21.691325  448535 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:13:21.691328  448535 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:13:21.691331  448535 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:13:21.691334  448535 cri.go:89] found id: ""
	I1216 04:13:21.691389  448535 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:13:21.725891  448535 out.go:203] 
	W1216 04:13:21.728817  448535 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:13:21.728843  448535 out.go:285] * 
	* 
	W1216 04:13:21.734515  448535 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:13:21.737399  448535 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.36s)
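
Note that the Volcano addon itself was skipped ("crio not supported"); only the trailing addons-disable step fails. The failing check can be replayed by hand. Both commands below come from the stderr log above, here wrapped in "minikube ssh" to run them inside the node (a diagnostic sketch, not part of the test):

    # Replay minikube's paused-state check on the node
    out/minikube-linux-arm64 -p addons-266389 ssh 'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
    out/minikube-linux-arm64 -p addons-266389 ssh 'sudo runc list -f json'   # fails: open /run/runc: no such file or directory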

TestAddons/parallel/Registry (15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.180713ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00418034s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003402885s
addons_test.go:394: (dbg) Run:  kubectl --context addons-266389 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-266389 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-266389 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.454251717s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 ip
2025/12/16 04:13:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable registry --alsologtostderr -v=1: exit status 11 (278.312279ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:13:47.783326  449069 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:13:47.784051  449069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:47.784068  449069 out.go:374] Setting ErrFile to fd 2...
	I1216 04:13:47.784073  449069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:47.784335  449069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:13:47.784625  449069 mustload.go:66] Loading cluster: addons-266389
	I1216 04:13:47.785047  449069 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:47.785118  449069 addons.go:622] checking whether the cluster is paused
	I1216 04:13:47.785233  449069 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:47.785250  449069 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:13:47.785823  449069 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:13:47.803074  449069 ssh_runner.go:195] Run: systemctl --version
	I1216 04:13:47.803135  449069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:13:47.822071  449069 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:13:47.923651  449069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:13:47.923737  449069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:13:47.965769  449069 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:13:47.965796  449069 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:13:47.965801  449069 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:13:47.965806  449069 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:13:47.965809  449069 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:13:47.965813  449069 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:13:47.965816  449069 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:13:47.965819  449069 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:13:47.965822  449069 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:13:47.965831  449069 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:13:47.965834  449069 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:13:47.965837  449069 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:13:47.965841  449069 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:13:47.965844  449069 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:13:47.965849  449069 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:13:47.965853  449069 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:13:47.965857  449069 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:13:47.965861  449069 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:13:47.965865  449069 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:13:47.965869  449069 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:13:47.965874  449069 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:13:47.965878  449069 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:13:47.965881  449069 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:13:47.965884  449069 cri.go:89] found id: ""
	I1216 04:13:47.965934  449069 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:13:47.984668  449069 out.go:203] 
	W1216 04:13:47.987584  449069 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:13:47.987987  449069 out.go:285] * 
	* 
	W1216 04:13:47.995065  449069 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:13:47.998459  449069 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.00s)
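
The registry itself was healthy here: both pods became Ready within about 5s and the in-cluster wget probe succeeded; only the same MK_ADDON_DISABLE_PAUSED check at teardown fails. The probe can be rerun independently of the disable step, verbatim from the log above:

    # In-cluster registry reachability probe (verbatim from this test's log)
    kubectl --context addons-266389 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"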

TestAddons/parallel/RegistryCreds (0.51s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.119811ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-266389
addons_test.go:334: (dbg) Run:  kubectl --context addons-266389 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (262.432753ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 04:14:24.884275  450911 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:14:24.885247  450911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:14:24.885264  450911 out.go:374] Setting ErrFile to fd 2...
	I1216 04:14:24.885271  450911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:14:24.885592  450911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:14:24.885950  450911 mustload.go:66] Loading cluster: addons-266389
	I1216 04:14:24.886421  450911 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:14:24.886444  450911 addons.go:622] checking whether the cluster is paused
	I1216 04:14:24.886590  450911 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:14:24.886608  450911 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:14:24.887197  450911 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:14:24.904649  450911 ssh_runner.go:195] Run: systemctl --version
	I1216 04:14:24.904710  450911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:14:24.928322  450911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:14:25.026538  450911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:14:25.026675  450911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:14:25.060526  450911 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:14:25.060555  450911 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:14:25.060562  450911 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:14:25.060566  450911 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:14:25.060569  450911 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:14:25.060573  450911 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:14:25.060576  450911 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:14:25.060579  450911 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:14:25.060582  450911 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:14:25.060595  450911 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:14:25.060598  450911 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:14:25.060602  450911 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:14:25.060606  450911 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:14:25.060608  450911 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:14:25.060612  450911 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:14:25.060621  450911 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:14:25.060630  450911 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:14:25.060637  450911 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:14:25.060640  450911 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:14:25.060643  450911 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:14:25.060648  450911 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:14:25.060651  450911 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:14:25.060654  450911 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:14:25.060656  450911 cri.go:89] found id: ""
	I1216 04:14:25.060713  450911 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:14:25.076180  450911 out.go:203] 
	W1216 04:14:25.079102  450911 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:14:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:14:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:14:25.079133  450911 out.go:285] * 
	* 
	W1216 04:14:25.084747  450911 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:14:25.087646  450911 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.51s)
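
As with Registry, the functional part of this test passed (the configure step and the secret check both completed); the failure is again the runc-based paused check. The passing steps, verbatim from the log, can be repeated on their own:

    out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-266389
    kubectl --context addons-266389 -n kube-system get secret -o yaml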

TestAddons/parallel/Ingress (142.17s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-266389 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-266389 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-266389 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [598c3be2-6161-4b99-8d89-f60ec4f71763] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [598c3be2-6161-4b99-8d89-f60ec4f71763] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003555018s
I1216 04:14:18.229331  441727 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.453822586s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
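
Exit status 28 propagated through ssh is curl's "operation timed out" code: nothing answered on port 80 inside the node within the 2m10s window, even though the nginx pod was Running. A quicker manual check with an explicit timeout (a sketch; the -m/-o/-w flags are additions, the rest is the command from the log):

    out/minikube-linux-arm64 -p addons-266389 ssh "curl -s -m 10 -o /dev/null -w '%{http_code}' http://127.0.0.1/ -H 'Host: nginx.example.com'"
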
addons_test.go:290: (dbg) Run:  kubectl --context addons-266389 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-266389
helpers_test.go:244: (dbg) docker inspect addons-266389:

-- stdout --
	[
	    {
	        "Id": "9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987",
	        "Created": "2025-12-16T04:11:08.406545814Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 443105,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:11:08.475077028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/hosts",
	        "LogPath": "/var/lib/docker/containers/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987-json.log",
	        "Name": "/addons-266389",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-266389:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-266389",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987",
	                "LowerDir": "/var/lib/docker/overlay2/de2d89a3bc2dae47cbf1a7f9b8b171048ebc2184f6036d5dde9eb8a2da6951c5-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de2d89a3bc2dae47cbf1a7f9b8b171048ebc2184f6036d5dde9eb8a2da6951c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de2d89a3bc2dae47cbf1a7f9b8b171048ebc2184f6036d5dde9eb8a2da6951c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de2d89a3bc2dae47cbf1a7f9b8b171048ebc2184f6036d5dde9eb8a2da6951c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-266389",
	                "Source": "/var/lib/docker/volumes/addons-266389/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-266389",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-266389",
	                "name.minikube.sigs.k8s.io": "addons-266389",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f19a4df5d96066478ebc4cc4326cda23338db4fcd77a621c569300f63befa945",
	            "SandboxKey": "/var/run/docker/netns/f19a4df5d960",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-266389": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:1b:50:b8:c7:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6eef94a8007f7ed82f36cde36f08b7467c5fc8984713511ba3a7c8bb1ab8afa",
	                    "EndpointID": "89f20aab0f197d1ea7f984566ad3de075f8447cc2d02920181350c670158b91a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-266389",
	                        "9c3b592c224e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
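
The Ports map above is how the harness reaches this node: every service port (22, 2376, 5000, 8443, 32443) is published only on 127.0.0.1, and the SSH port (33133 here) is resolved at runtime with the same inspect template that appears in the stderr logs earlier in this report:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-266389
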
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-266389 -n addons-266389
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-266389 logs -n 25: (1.524386856s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-461022                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-461022 │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ start   │ --download-only -p binary-mirror-260964 --alsologtostderr --binary-mirror http://127.0.0.1:39905 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-260964   │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │                     │
	│ delete  │ -p binary-mirror-260964                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-260964   │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ addons  │ enable dashboard -p addons-266389                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │                     │
	│ addons  │ disable dashboard -p addons-266389                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │                     │
	│ start   │ -p addons-266389 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:13 UTC │
	│ addons  │ addons-266389 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │                     │
	│ addons  │ addons-266389 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │                     │
	│ addons  │ addons-266389 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │                     │
	│ addons  │ addons-266389 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │                     │
	│ ip      │ addons-266389 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ 16 Dec 25 04:13 UTC │
	│ addons  │ addons-266389 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │                     │
	│ ssh     │ addons-266389 ssh cat /opt/local-path-provisioner/pvc-12852da6-9e8a-4765-8a93-15cde56a9879_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ 16 Dec 25 04:13 UTC │
	│ addons  │ addons-266389 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │                     │
	│ addons  │ enable headlamp -p addons-266389 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │                     │
	│ addons  │ addons-266389 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │                     │
	│ addons  │ addons-266389 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │                     │
	│ addons  │ addons-266389 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:14 UTC │                     │
	│ addons  │ addons-266389 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:14 UTC │                     │
	│ ssh     │ addons-266389 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:14 UTC │                     │
	│ addons  │ addons-266389 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:14 UTC │                     │
	│ addons  │ addons-266389 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:14 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-266389                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:14 UTC │ 16 Dec 25 04:14 UTC │
	│ addons  │ addons-266389 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:14 UTC │                     │
	│ ip      │ addons-266389 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-266389          │ jenkins │ v1.37.0 │ 16 Dec 25 04:16 UTC │ 16 Dec 25 04:16 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:11:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:11:01.618400  442720 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:11:01.618611  442720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:01.618639  442720 out.go:374] Setting ErrFile to fd 2...
	I1216 04:11:01.618658  442720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:01.618961  442720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:11:01.619492  442720 out.go:368] Setting JSON to false
	I1216 04:11:01.620382  442720 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10408,"bootTime":1765847854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:11:01.620485  442720 start.go:143] virtualization:  
	I1216 04:11:01.624197  442720 out.go:179] * [addons-266389] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:11:01.627504  442720 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:11:01.627599  442720 notify.go:221] Checking for updates...
	I1216 04:11:01.633683  442720 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:11:01.636692  442720 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:11:01.640202  442720 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:11:01.643173  442720 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:11:01.646230  442720 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:11:01.649339  442720 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:11:01.688515  442720 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:11:01.688634  442720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:01.742025  442720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 04:11:01.733059509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:01.742137  442720 docker.go:319] overlay module found
	I1216 04:11:01.745300  442720 out.go:179] * Using the docker driver based on user configuration
	I1216 04:11:01.748234  442720 start.go:309] selected driver: docker
	I1216 04:11:01.748255  442720 start.go:927] validating driver "docker" against <nil>
	I1216 04:11:01.748268  442720 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:11:01.748997  442720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:01.820098  442720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 04:11:01.810549634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:01.820271  442720 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:11:01.820522  442720 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:11:01.823556  442720 out.go:179] * Using Docker driver with root privileges
	I1216 04:11:01.826668  442720 cni.go:84] Creating CNI manager for ""
	I1216 04:11:01.826747  442720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:11:01.826759  442720 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:11:01.826839  442720 start.go:353] cluster config:
	{Name:addons-266389 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-266389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:01.830189  442720 out.go:179] * Starting "addons-266389" primary control-plane node in "addons-266389" cluster
	I1216 04:11:01.833039  442720 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:11:01.836179  442720 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:11:01.839074  442720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:11:01.839127  442720 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 04:11:01.839169  442720 cache.go:65] Caching tarball of preloaded images
	I1216 04:11:01.839247  442720 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:11:01.839262  442720 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:11:01.839274  442720 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 04:11:01.839632  442720 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/config.json ...
	I1216 04:11:01.839654  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/config.json: {Name:mk928b082baefcda33cbb318ef9234c1ac520635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:01.859858  442720 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:11:01.859881  442720 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:11:01.859901  442720 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:11:01.859935  442720 start.go:360] acquireMachinesLock for addons-266389: {Name:mk82ef214a88a1269a11e23e2aa5197425e975a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:11:01.860043  442720 start.go:364] duration metric: took 86.105µs to acquireMachinesLock for "addons-266389"
	I1216 04:11:01.860075  442720 start.go:93] Provisioning new machine with config: &{Name:addons-266389 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-266389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:11:01.860146  442720 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:11:01.863619  442720 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1216 04:11:01.863887  442720 start.go:159] libmachine.API.Create for "addons-266389" (driver="docker")
	I1216 04:11:01.863925  442720 client.go:173] LocalClient.Create starting
	I1216 04:11:01.864042  442720 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem
	I1216 04:11:01.977945  442720 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem
	I1216 04:11:02.042171  442720 cli_runner.go:164] Run: docker network inspect addons-266389 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:11:02.059779  442720 cli_runner.go:211] docker network inspect addons-266389 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:11:02.059871  442720 network_create.go:284] running [docker network inspect addons-266389] to gather additional debugging logs...
	I1216 04:11:02.059892  442720 cli_runner.go:164] Run: docker network inspect addons-266389
	W1216 04:11:02.076169  442720 cli_runner.go:211] docker network inspect addons-266389 returned with exit code 1
	I1216 04:11:02.076226  442720 network_create.go:287] error running [docker network inspect addons-266389]: docker network inspect addons-266389: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-266389 not found
	I1216 04:11:02.076241  442720 network_create.go:289] output of [docker network inspect addons-266389]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-266389 not found
	
	** /stderr **
	I1216 04:11:02.076344  442720 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:11:02.093057  442720 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ba760}
	I1216 04:11:02.093117  442720 network_create.go:124] attempt to create docker network addons-266389 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 04:11:02.093181  442720 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-266389 addons-266389
	I1216 04:11:02.154824  442720 network_create.go:108] docker network addons-266389 192.168.49.0/24 created
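
	The subnet bookkeeping logged above (gateway 192.168.49.1, client range .2-.254, broadcast .255, all derived from the free prefix 192.168.49.0/24) can be reproduced with a short Go sketch using only the standard library's net/netip package. The prefix literal is the one from the log; everything else is illustrative:

	    package main

	    import (
	    	"fmt"
	    	"net/netip"
	    )

	    // lastAddr returns the highest address in an IPv4 prefix (the broadcast address).
	    func lastAddr(p netip.Prefix) netip.Addr {
	    	a := p.Addr().As4()
	    	hostBits := 32 - p.Bits()
	    	for i := 0; i < hostBits; i++ {
	    		a[3-i/8] |= 1 << (i % 8)
	    	}
	    	return netip.AddrFrom4(a)
	    }

	    func main() {
	    	// Subnet chosen by minikube in the log above.
	    	p := netip.MustParsePrefix("192.168.49.0/24")

	    	gateway := p.Addr().Next()   // 192.168.49.1
	    	clientMin := gateway.Next()  // 192.168.49.2
	    	broadcast := lastAddr(p)     // 192.168.49.255
	    	clientMax := broadcast.Prev() // 192.168.49.254

	    	fmt.Println("gateway:", gateway)
	    	fmt.Println("clients:", clientMin, "-", clientMax)
	    	fmt.Println("broadcast:", broadcast)
	    }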
	I1216 04:11:02.154856  442720 kic.go:121] calculated static IP "192.168.49.2" for the "addons-266389" container
	I1216 04:11:02.154953  442720 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:11:02.171697  442720 cli_runner.go:164] Run: docker volume create addons-266389 --label name.minikube.sigs.k8s.io=addons-266389 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:11:02.188990  442720 oci.go:103] Successfully created a docker volume addons-266389
	I1216 04:11:02.189257  442720 cli_runner.go:164] Run: docker run --rm --name addons-266389-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-266389 --entrypoint /usr/bin/test -v addons-266389:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 04:11:04.340423  442720 cli_runner.go:217] Completed: docker run --rm --name addons-266389-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-266389 --entrypoint /usr/bin/test -v addons-266389:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib: (2.151123081s)
	I1216 04:11:04.340456  442720 oci.go:107] Successfully prepared a docker volume addons-266389
	I1216 04:11:04.340504  442720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:11:04.340518  442720 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:11:04.340592  442720 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-266389:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:11:08.318344  442720 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-266389:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.977709177s)
	I1216 04:11:08.318382  442720 kic.go:203] duration metric: took 3.977860333s to extract preloaded images to volume ...
	W1216 04:11:08.318528  442720 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 04:11:08.318643  442720 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:11:08.390990  442720 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-266389 --name addons-266389 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-266389 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-266389 --network addons-266389 --ip 192.168.49.2 --volume addons-266389:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 04:11:08.713456  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Running}}
	I1216 04:11:08.734404  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:08.759829  442720 cli_runner.go:164] Run: docker exec addons-266389 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:11:08.817569  442720 oci.go:144] the created container "addons-266389" has a running status.
	I1216 04:11:08.817598  442720 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa...
	I1216 04:11:09.050275  442720 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:11:09.081399  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:09.107862  442720 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:11:09.107880  442720 kic_runner.go:114] Args: [docker exec --privileged addons-266389 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:11:09.167433  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:09.193351  442720 machine.go:94] provisionDockerMachine start ...
	I1216 04:11:09.193449  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:09.219033  442720 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:09.219399  442720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1216 04:11:09.219409  442720 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:11:09.220052  442720 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58428->127.0.0.1:33133: read: connection reset by peer
	I1216 04:11:12.356940  442720 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-266389
	
	I1216 04:11:12.356975  442720 ubuntu.go:182] provisioning hostname "addons-266389"
	I1216 04:11:12.357045  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:12.375362  442720 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:12.375677  442720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1216 04:11:12.375694  442720 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-266389 && echo "addons-266389" | sudo tee /etc/hostname
	I1216 04:11:12.519476  442720 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-266389
	
	I1216 04:11:12.519586  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:12.537725  442720 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:12.538050  442720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1216 04:11:12.538076  442720 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-266389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-266389/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-266389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:11:12.669282  442720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
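
	The SSH round-trips above boil down to dialing the container's published port on 127.0.0.1 with the generated machine key and running one command per step. A minimal Go sketch of that handshake, assuming the golang.org/x/crypto/ssh package; the port (33133) and key path are simply the values this particular run logged and would differ per run:

	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"os"

	    	"golang.org/x/crypto/ssh" // assumed dependency
	    )

	    func main() {
	    	key, err := os.ReadFile("/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	signer, err := ssh.ParsePrivateKey(key)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	cfg := &ssh.ClientConfig{
	    		User:            "docker",
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	    	}
	    	client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer client.Close()

	    	session, err := client.NewSession()
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer session.Close()

	    	out, err := session.CombinedOutput("hostname")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	fmt.Printf("hostname: %s", out)
	    }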
	I1216 04:11:12.669310  442720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:11:12.669332  442720 ubuntu.go:190] setting up certificates
	I1216 04:11:12.669348  442720 provision.go:84] configureAuth start
	I1216 04:11:12.669413  442720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-266389
	I1216 04:11:12.688045  442720 provision.go:143] copyHostCerts
	I1216 04:11:12.688128  442720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:11:12.688265  442720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:11:12.688327  442720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:11:12.688391  442720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.addons-266389 san=[127.0.0.1 192.168.49.2 addons-266389 localhost minikube]
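
	The server-cert step above embeds the listed SANs (127.0.0.1, 192.168.49.2, addons-266389, localhost, minikube). A sketch of equivalent SAN handling with Go's standard crypto/x509; self-signed here for brevity, whereas the log shows minikube signing against its own CA, and the org and expiry values are just copied from the log:

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		panic(err)
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-266389"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		// The SAN list reported in the log line above.
	    		DNSNames:    []string{"addons-266389", "localhost", "minikube"},
	    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
	    	if err != nil {
	    		panic(err)
	    	}
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }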
	I1216 04:11:12.892884  442720 provision.go:177] copyRemoteCerts
	I1216 04:11:12.892960  442720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:11:12.893011  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:12.910010  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:13.008547  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 04:11:13.028110  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:11:13.046294  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:11:13.064187  442720 provision.go:87] duration metric: took 394.81974ms to configureAuth
	I1216 04:11:13.064215  442720 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:11:13.064423  442720 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:11:13.064530  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:13.082250  442720 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:13.082564  442720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1216 04:11:13.082584  442720 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:11:13.523886  442720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:11:13.523954  442720 machine.go:97] duration metric: took 4.330582197s to provisionDockerMachine
	I1216 04:11:13.523981  442720 client.go:176] duration metric: took 11.6600486s to LocalClient.Create
	I1216 04:11:13.524028  442720 start.go:167] duration metric: took 11.660127658s to libmachine.API.Create "addons-266389"
	I1216 04:11:13.524054  442720 start.go:293] postStartSetup for "addons-266389" (driver="docker")
	I1216 04:11:13.524078  442720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:11:13.524184  442720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:11:13.524270  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:13.542554  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:13.641578  442720 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:11:13.645355  442720 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:11:13.645384  442720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:11:13.645396  442720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:11:13.645470  442720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:11:13.645511  442720 start.go:296] duration metric: took 121.437067ms for postStartSetup
	I1216 04:11:13.645846  442720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-266389
	I1216 04:11:13.665774  442720 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/config.json ...
	I1216 04:11:13.666088  442720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:11:13.666142  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:13.683640  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:13.778684  442720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:11:13.783664  442720 start.go:128] duration metric: took 11.923501736s to createHost
	I1216 04:11:13.783687  442720 start.go:83] releasing machines lock for "addons-266389", held for 11.923629934s
	I1216 04:11:13.783756  442720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-266389
	I1216 04:11:13.801186  442720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:11:13.801261  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:13.801493  442720 ssh_runner.go:195] Run: cat /version.json
	I1216 04:11:13.801537  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:13.828452  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:13.829910  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:14.014366  442720 ssh_runner.go:195] Run: systemctl --version
	I1216 04:11:14.021120  442720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:11:14.067580  442720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:11:14.072206  442720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:11:14.072314  442720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:11:14.107285  442720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 04:11:14.107320  442720 start.go:496] detecting cgroup driver to use...
	I1216 04:11:14.107378  442720 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:11:14.107457  442720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:11:14.125666  442720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:11:14.138359  442720 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:11:14.138467  442720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:11:14.156397  442720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:11:14.175606  442720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:11:14.299643  442720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:11:14.419483  442720 docker.go:234] disabling docker service ...
	I1216 04:11:14.419552  442720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:11:14.444416  442720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:11:14.457800  442720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:11:14.572505  442720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:11:14.693554  442720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:11:14.707822  442720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:11:14.723294  442720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:11:14.723415  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.732798  442720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:11:14.732923  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.742029  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.750586  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.759604  442720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:11:14.767481  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.776443  442720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.790173  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.799199  442720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:11:14.806986  442720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:11:14.814637  442720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:14.926134  442720 ssh_runner.go:195] Run: sudo systemctl restart crio
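
	The cri-o reconfiguration above is a series of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf followed by a daemon reload and restart. The same rewrites, expressed as a Go sketch with regexp stand-ins for the sed calls and an illustrative config fragment in place of the real file:

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    func main() {
	    	// A fragment standing in for /etc/crio/crio.conf.d/02-crio.conf.
	    	conf := `[crio.runtime]
	    cgroup_manager = "systemd"
	    pause_image = "registry.k8s.io/pause:3.9"`

	    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
	    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
	    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	    	fmt.Println(conf)
	    }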
	I1216 04:11:15.117578  442720 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:11:15.117672  442720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:11:15.121839  442720 start.go:564] Will wait 60s for crictl version
	I1216 04:11:15.121946  442720 ssh_runner.go:195] Run: which crictl
	I1216 04:11:15.125981  442720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:11:15.159054  442720 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
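
	The version probe above shells out to crictl and parses its output. A Go sketch of the same check via os/exec, assuming crictl is on PATH and sudo is non-interactive:

	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"os/exec"
	    )

	    func main() {
	    	// Equivalent of the `sudo /usr/local/bin/crictl version` call in the log.
	    	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	    	if err != nil {
	    		log.Fatalf("crictl version failed: %v\n%s", err, out)
	    	}
	    	fmt.Print(string(out))
	    }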
	I1216 04:11:15.159187  442720 ssh_runner.go:195] Run: crio --version
	I1216 04:11:15.194445  442720 ssh_runner.go:195] Run: crio --version
	I1216 04:11:15.228816  442720 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 04:11:15.231600  442720 cli_runner.go:164] Run: docker network inspect addons-266389 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:11:15.247753  442720 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:11:15.251792  442720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:11:15.261795  442720 kubeadm.go:884] updating cluster {Name:addons-266389 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-266389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:11:15.261911  442720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:11:15.261973  442720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:15.303415  442720 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:11:15.303442  442720 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:11:15.303497  442720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:15.328121  442720 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:11:15.328145  442720 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:11:15.328153  442720 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 04:11:15.328253  442720 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-266389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-266389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:11:15.328344  442720 ssh_runner.go:195] Run: crio config
	I1216 04:11:15.385689  442720 cni.go:84] Creating CNI manager for ""
	I1216 04:11:15.385710  442720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:11:15.385730  442720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:11:15.385773  442720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-266389 NodeName:addons-266389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:11:15.385928  442720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-266389"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
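	The generated kubeadm config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that enumerates the document kinds, assuming the gopkg.in/yaml.v3 package; the abbreviated literal stands in for the full config shown above:

	    package main

	    import (
	    	"fmt"
	    	"strings"

	    	"gopkg.in/yaml.v3" // assumed dependency
	    )

	    func main() {
	    	// kubeadmConfig would hold the multi-document YAML written to kubeadm.yaml.new.
	    	kubeadmConfig := `apiVersion: kubeadm.k8s.io/v1beta4
	    kind: InitConfiguration
	    ---
	    apiVersion: kubeadm.k8s.io/v1beta4
	    kind: ClusterConfiguration
	    ---
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    ---
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration`

	    	for _, doc := range strings.Split(kubeadmConfig, "\n---\n") {
	    		var meta struct {
	    			APIVersion string `yaml:"apiVersion"`
	    			Kind       string `yaml:"kind"`
	    		}
	    		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
	    			fmt.Println("parse error:", err)
	    			continue
	    		}
	    		fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
	    	}
	    }
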
	I1216 04:11:15.386002  442720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 04:11:15.393911  442720 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:11:15.393986  442720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:11:15.401743  442720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 04:11:15.414301  442720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 04:11:15.427771  442720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1216 04:11:15.440648  442720 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:11:15.444292  442720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:11:15.454860  442720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:15.574219  442720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:15.593745  442720 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389 for IP: 192.168.49.2
	I1216 04:11:15.593771  442720 certs.go:195] generating shared ca certs ...
	I1216 04:11:15.593788  442720 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:15.593991  442720 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:11:16.288935  442720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt ...
	I1216 04:11:16.288971  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt: {Name:mkef7cc8e40cf9cf18882fc19685f38beb3555c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.289180  442720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key ...
	I1216 04:11:16.289194  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key: {Name:mk95ea541e007c7a661178f6b17e1b58b4611c6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.289282  442720 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:11:16.538924  442720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt ...
	I1216 04:11:16.538960  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt: {Name:mk467ffb251f3855bd5f201ad1a531b5d81ec1b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.539148  442720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key ...
	I1216 04:11:16.539161  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key: {Name:mkfc5e95d42754d910610cfe88527f26994e5612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.539245  442720 certs.go:257] generating profile certs ...
	I1216 04:11:16.539310  442720 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.key
	I1216 04:11:16.539326  442720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt with IP's: []
	I1216 04:11:16.934282  442720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt ...
	I1216 04:11:16.934318  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: {Name:mk722f651548c20b8e386acd15601cc2b9235cd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.934510  442720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.key ...
	I1216 04:11:16.934524  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.key: {Name:mk974ebc87a89355626c1d66a8f9a00bb589e1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.934613  442720 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key.34fef09e
	I1216 04:11:16.934632  442720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt.34fef09e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 04:11:17.147468  442720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt.34fef09e ...
	I1216 04:11:17.147503  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt.34fef09e: {Name:mk082d56ec7a26652ba27537bb6baa1777f23918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:17.147684  442720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key.34fef09e ...
	I1216 04:11:17.147704  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key.34fef09e: {Name:mkd71827f418350327e2411ff753dff35207360e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:17.147809  442720 certs.go:382] copying /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt.34fef09e -> /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt
	I1216 04:11:17.147898  442720 certs.go:386] copying /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key.34fef09e -> /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key
	I1216 04:11:17.147951  442720 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.key
	I1216 04:11:17.147973  442720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.crt with IP's: []
	I1216 04:11:17.375348  442720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.crt ...
	I1216 04:11:17.375379  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.crt: {Name:mk05692efb75cf03d41d0c1f39bc0a2b14ef23e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:17.375556  442720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.key ...
	I1216 04:11:17.375570  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.key: {Name:mk4b7f58427ab539dd343313509f686816ea3d31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:17.375799  442720 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:11:17.375846  442720 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:11:17.375879  442720 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:11:17.375909  442720 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:11:17.376592  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:11:17.395029  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:11:17.412856  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:11:17.431212  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:11:17.449283  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 04:11:17.467280  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 04:11:17.485053  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:11:17.503016  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 04:11:17.521340  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:11:17.539164  442720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:11:17.552410  442720 ssh_runner.go:195] Run: openssl version
	I1216 04:11:17.558830  442720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:17.566721  442720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:11:17.574660  442720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:17.578437  442720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:17.578502  442720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:17.620579  442720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:11:17.628308  442720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
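The link name b5213941.0 above is not arbitrary: it is OpenSSL's subject hash of the minikube CA certificate, which is how /etc/ssl/certs lookups locate it. A minimal sketch to reproduce the symlink, assuming the paths from the log:

	# Compute the 8-hex-digit subject hash and create the matching symlink.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"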
	I1216 04:11:17.636469  442720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:11:17.640339  442720 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:11:17.640390  442720 kubeadm.go:401] StartCluster: {Name:addons-266389 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-266389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:17.640461  442720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:11:17.640540  442720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:11:17.667266  442720 cri.go:89] found id: ""
	I1216 04:11:17.667387  442720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:11:17.675136  442720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:11:17.682929  442720 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:11:17.682993  442720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:11:17.690795  442720 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:11:17.690817  442720 kubeadm.go:158] found existing configuration files:
	
	I1216 04:11:17.690889  442720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:11:17.698705  442720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:11:17.698770  442720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:11:17.705799  442720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:11:17.713044  442720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:11:17.713181  442720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:11:17.720495  442720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:11:17.728032  442720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:11:17.728097  442720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:11:17.735299  442720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:11:17.742937  442720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:11:17.742999  442720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:11:17.750293  442720 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:11:17.787269  442720 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 04:11:17.787363  442720 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:11:17.823254  442720 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:11:17.823328  442720 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:11:17.823368  442720 kubeadm.go:319] OS: Linux
	I1216 04:11:17.823424  442720 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:11:17.823484  442720 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:11:17.823539  442720 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:11:17.823591  442720 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:11:17.823647  442720 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:11:17.823701  442720 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:11:17.823749  442720 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:11:17.823800  442720 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:11:17.823849  442720 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:11:17.905734  442720 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:11:17.905889  442720 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:11:17.906025  442720 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:11:17.914672  442720 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:11:17.919039  442720 out.go:252]   - Generating certificates and keys ...
	I1216 04:11:17.919159  442720 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:11:17.919239  442720 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:11:18.132156  442720 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:11:18.826033  442720 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:11:19.319790  442720 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:11:19.827992  442720 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:11:20.080061  442720 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:11:20.081684  442720 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-266389 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:11:20.132428  442720 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:11:20.132746  442720 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-266389 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:11:20.266705  442720 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:11:21.021209  442720 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:11:21.581413  442720 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:11:21.581703  442720 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:11:22.095204  442720 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:11:22.540202  442720 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:11:23.210649  442720 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:11:23.903829  442720 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:11:24.135916  442720 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:11:24.136484  442720 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:11:24.139810  442720 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:11:24.143250  442720 out.go:252]   - Booting up control plane ...
	I1216 04:11:24.143360  442720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:11:24.143437  442720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:11:24.144899  442720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:11:24.160619  442720 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:11:24.160969  442720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:11:24.169758  442720 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:11:24.170072  442720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:11:24.170365  442720 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:11:24.305587  442720 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:11:24.305712  442720 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:11:25.299921  442720 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000669394s
	I1216 04:11:25.303657  442720 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 04:11:25.303752  442720 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1216 04:11:25.303850  442720 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 04:11:25.303937  442720 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 04:11:27.645654  442720 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.341374418s
	I1216 04:11:29.058180  442720 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.754443072s
	I1216 04:11:30.805369  442720 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501539521s
	I1216 04:11:30.837124  442720 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 04:11:30.858867  442720 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 04:11:30.877394  442720 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 04:11:30.877877  442720 kubeadm.go:319] [mark-control-plane] Marking the node addons-266389 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 04:11:30.889770  442720 kubeadm.go:319] [bootstrap-token] Using token: lcp9n3.z0gj24q1nalp0g4f
	I1216 04:11:30.892718  442720 out.go:252]   - Configuring RBAC rules ...
	I1216 04:11:30.892852  442720 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 04:11:30.899218  442720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 04:11:30.907983  442720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 04:11:30.912496  442720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 04:11:30.916621  442720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 04:11:30.920796  442720 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 04:11:31.213333  442720 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 04:11:31.655878  442720 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 04:11:32.211912  442720 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 04:11:32.213108  442720 kubeadm.go:319] 
	I1216 04:11:32.213187  442720 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 04:11:32.213210  442720 kubeadm.go:319] 
	I1216 04:11:32.213294  442720 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 04:11:32.213302  442720 kubeadm.go:319] 
	I1216 04:11:32.213328  442720 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 04:11:32.213390  442720 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 04:11:32.213445  442720 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 04:11:32.213453  442720 kubeadm.go:319] 
	I1216 04:11:32.213515  442720 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 04:11:32.213523  442720 kubeadm.go:319] 
	I1216 04:11:32.213571  442720 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 04:11:32.213578  442720 kubeadm.go:319] 
	I1216 04:11:32.213630  442720 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 04:11:32.213709  442720 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 04:11:32.213783  442720 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 04:11:32.213791  442720 kubeadm.go:319] 
	I1216 04:11:32.213875  442720 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 04:11:32.213956  442720 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 04:11:32.213963  442720 kubeadm.go:319] 
	I1216 04:11:32.214047  442720 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lcp9n3.z0gj24q1nalp0g4f \
	I1216 04:11:32.214154  442720 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e08298e77cafc423d9b109ab7877d99e66f943a14d7b74758966013799c879bb \
	I1216 04:11:32.214177  442720 kubeadm.go:319] 	--control-plane 
	I1216 04:11:32.214181  442720 kubeadm.go:319] 
	I1216 04:11:32.214270  442720 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 04:11:32.214276  442720 kubeadm.go:319] 
	I1216 04:11:32.214359  442720 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lcp9n3.z0gj24q1nalp0g4f \
	I1216 04:11:32.214461  442720 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e08298e77cafc423d9b109ab7877d99e66f943a14d7b74758966013799c879bb 
	I1216 04:11:32.218506  442720 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 04:11:32.218744  442720 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:11:32.218849  442720 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
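The --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA. The standard kubeadm recipe, run inside the node (certificateDir per the log is /var/lib/minikube/certs; reach it e.g. via `minikube ssh`), should reproduce the sha256:e08298e7… digest shown above:

	# SHA-256 over the DER-encoded public key of the cluster CA.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex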
	I1216 04:11:32.218866  442720 cni.go:84] Creating CNI manager for ""
	I1216 04:11:32.218874  442720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:11:32.222331  442720 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 04:11:32.225240  442720 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 04:11:32.229638  442720 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 04:11:32.229661  442720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 04:11:32.244551  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 04:11:32.531564  442720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 04:11:32.531688  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:32.531784  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-266389 minikube.k8s.io/updated_at=2025_12_16T04_11_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=addons-266389 minikube.k8s.io/primary=true
	I1216 04:11:32.780880  442720 ops.go:34] apiserver oom_adj: -16
	I1216 04:11:32.780992  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:33.281832  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:33.781970  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:34.281971  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:34.781894  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:35.282078  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:35.781247  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:36.281181  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:36.781627  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:37.281853  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:37.376291  442720 kubeadm.go:1114] duration metric: took 4.844646206s to wait for elevateKubeSystemPrivileges
	I1216 04:11:37.376323  442720 kubeadm.go:403] duration metric: took 19.735936496s to StartCluster
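The burst of `kubectl get sa default` calls above (04:11:32.780 through 04:11:37.281) is a readiness poll: minikube retries roughly every 500ms until the default service account exists, which is what the 4.84s elevateKubeSystemPrivileges metric measures. The equivalent loop, using the paths from this run:

	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done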
	I1216 04:11:37.376341  442720 settings.go:142] acquiring lock: {Name:mk7579526d30444d4a36dd9eeacfd82389e55168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:37.376453  442720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:11:37.376874  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:37.377129  442720 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:11:37.377277  442720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 04:11:37.377545  442720 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:11:37.377588  442720 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 04:11:37.377668  442720 addons.go:70] Setting yakd=true in profile "addons-266389"
	I1216 04:11:37.377683  442720 addons.go:239] Setting addon yakd=true in "addons-266389"
	I1216 04:11:37.377706  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.378213  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.378470  442720 addons.go:70] Setting inspektor-gadget=true in profile "addons-266389"
	I1216 04:11:37.378499  442720 addons.go:239] Setting addon inspektor-gadget=true in "addons-266389"
	I1216 04:11:37.378521  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.378978  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.379465  442720 addons.go:70] Setting metrics-server=true in profile "addons-266389"
	I1216 04:11:37.379483  442720 addons.go:239] Setting addon metrics-server=true in "addons-266389"
	I1216 04:11:37.379503  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.379905  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.384071  442720 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-266389"
	I1216 04:11:37.384116  442720 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-266389"
	I1216 04:11:37.384169  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.384835  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385230  442720 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-266389"
	I1216 04:11:37.385361  442720 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-266389"
	I1216 04:11:37.385987  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.389823  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385523  442720 addons.go:70] Setting cloud-spanner=true in profile "addons-266389"
	I1216 04:11:37.390783  442720 addons.go:239] Setting addon cloud-spanner=true in "addons-266389"
	I1216 04:11:37.390986  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.391508  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385535  442720 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-266389"
	I1216 04:11:37.418169  442720 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-266389"
	I1216 04:11:37.418205  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.418704  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385547  442720 addons.go:70] Setting default-storageclass=true in profile "addons-266389"
	I1216 04:11:37.427450  442720 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-266389"
	I1216 04:11:37.385554  442720 addons.go:70] Setting gcp-auth=true in profile "addons-266389"
	I1216 04:11:37.441336  442720 mustload.go:66] Loading cluster: addons-266389
	I1216 04:11:37.441705  442720 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:11:37.442134  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.444485  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385560  442720 addons.go:70] Setting ingress=true in profile "addons-266389"
	I1216 04:11:37.467393  442720 addons.go:239] Setting addon ingress=true in "addons-266389"
	I1216 04:11:37.467507  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.468022  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385570  442720 addons.go:70] Setting ingress-dns=true in profile "addons-266389"
	I1216 04:11:37.481891  442720 addons.go:239] Setting addon ingress-dns=true in "addons-266389"
	I1216 04:11:37.481963  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.482684  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385623  442720 out.go:179] * Verifying Kubernetes components...
	I1216 04:11:37.509141  442720 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 04:11:37.509370  442720 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1216 04:11:37.385803  442720 addons.go:70] Setting volcano=true in profile "addons-266389"
	I1216 04:11:37.385817  442720 addons.go:70] Setting registry=true in profile "addons-266389"
	I1216 04:11:37.385825  442720 addons.go:70] Setting registry-creds=true in profile "addons-266389"
	I1216 04:11:37.385835  442720 addons.go:70] Setting storage-provisioner=true in profile "addons-266389"
	I1216 04:11:37.385847  442720 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-266389"
	I1216 04:11:37.385897  442720 addons.go:70] Setting volumesnapshots=true in profile "addons-266389"
	I1216 04:11:37.518809  442720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:37.532741  442720 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 04:11:37.532781  442720 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 04:11:37.532885  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
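The template lookup above extracts the published host port for the container's SSH endpoint (port 33133 in this run); `docker port` is a shorter equivalent:

	docker port addons-266389 22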
	I1216 04:11:37.538694  442720 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1216 04:11:37.539309  442720 addons.go:239] Setting addon registry=true in "addons-266389"
	I1216 04:11:37.539451  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.540324  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.552513  442720 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 04:11:37.552606  442720 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 04:11:37.552781  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.553776  442720 addons.go:239] Setting addon registry-creds=true in "addons-266389"
	I1216 04:11:37.553836  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.554481  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.554945  442720 addons.go:239] Setting addon volcano=true in "addons-266389"
	I1216 04:11:37.555000  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.564892  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.567270  442720 addons.go:239] Setting addon storage-provisioner=true in "addons-266389"
	I1216 04:11:37.567329  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.567955  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.574846  442720 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1216 04:11:37.593312  442720 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 04:11:37.593338  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1216 04:11:37.593437  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.597439  442720 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-266389"
	I1216 04:11:37.597963  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.612273  442720 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1216 04:11:37.615265  442720 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1216 04:11:37.615292  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 04:11:37.615370  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.618042  442720 addons.go:239] Setting addon volumesnapshots=true in "addons-266389"
	I1216 04:11:37.618103  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.618610  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.637245  442720 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 04:11:37.637678  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.640271  442720 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 04:11:37.640291  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 04:11:37.640353  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.641000  442720 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 04:11:37.641026  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 04:11:37.641183  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.660773  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 04:11:37.663915  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 04:11:37.669228  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 04:11:37.679599  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 04:11:37.681637  442720 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1216 04:11:37.704051  442720 addons.go:239] Setting addon default-storageclass=true in "addons-266389"
	I1216 04:11:37.704091  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.704491  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.717021  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 04:11:37.717576  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:37.728585  442720 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1216 04:11:37.729114  442720 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:11:37.739138  442720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 04:11:37.743221  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 04:11:37.775769  442720 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:11:37.788982  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:37.790990  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 04:11:37.795379  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 04:11:37.801469  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 04:11:37.801499  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 04:11:37.801615  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.811643  442720 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 04:11:37.811741  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 04:11:37.811933  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.830765  442720 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 04:11:37.830795  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1216 04:11:37.830912  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.879360  442720 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-266389"
	I1216 04:11:37.879410  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.879831  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.887977  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:37.896775  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	W1216 04:11:37.898398  442720 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 04:11:37.922941  442720 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1216 04:11:37.926048  442720 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 04:11:37.926075  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1216 04:11:37.926168  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.937618  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:37.948515  442720 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1216 04:11:37.951478  442720 out.go:179]   - Using image docker.io/registry:3.0.0
	I1216 04:11:37.955085  442720 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 04:11:37.955110  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 04:11:37.955181  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.982840  442720 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:37.982903  442720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:11:37.982993  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.984790  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 04:11:37.989051  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 04:11:37.989107  442720 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 04:11:37.989171  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:38.008253  442720 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:11:38.011729  442720 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:38.011759  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:11:38.011846  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:38.052406  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.078035  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.111583  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.111928  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.122495  442720 out.go:179]   - Using image docker.io/busybox:stable
	I1216 04:11:38.126152  442720 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 04:11:38.130397  442720 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 04:11:38.130421  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 04:11:38.130489  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:38.131198  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.157373  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.160860  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.173910  442720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:38.176196  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	W1216 04:11:38.184638  442720 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:11:38.184688  442720 retry.go:31] will retry after 155.541845ms: ssh: handshake failed: EOF
	W1216 04:11:38.184834  442720 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:11:38.184844  442720 retry.go:31] will retry after 127.179581ms: ssh: handshake failed: EOF
	I1216 04:11:38.201215  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	W1216 04:11:38.202928  442720 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:11:38.202966  442720 retry.go:31] will retry after 227.368976ms: ssh: handshake failed: EOF
	I1216 04:11:38.213269  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	W1216 04:11:38.345851  442720 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:11:38.345879  442720 retry.go:31] will retry after 504.257003ms: ssh: handshake failed: EOF
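The handshake failures above are expected while the SSH daemon inside the kic container is still coming up; sshutil retries each dial after a short randomized delay. A loose sketch of that retry loop against this run's endpoint (delays taken from the log; not minikube's actual backoff schedule):

	for delay in 0.13 0.16 0.23 0.50; do
	  ssh -p 33133 \
	      -i /home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa \
	      docker@127.0.0.1 true && break
	  sleep "$delay"
	done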
	I1216 04:11:38.526161  442720 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 04:11:38.526181  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 04:11:38.583977  442720 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 04:11:38.584045  442720 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 04:11:38.739168  442720 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 04:11:38.739196  442720 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 04:11:38.831416  442720 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 04:11:38.831446  442720 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 04:11:38.880782  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 04:11:38.934647  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 04:11:38.934689  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 04:11:38.947299  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 04:11:38.972222  442720 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 04:11:38.972251  442720 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 04:11:38.973924  442720 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 04:11:38.973950  442720 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 04:11:38.976344  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 04:11:38.976489  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 04:11:39.025027  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 04:11:39.028602  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 04:11:39.047754  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 04:11:39.083060  442720 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 04:11:39.083091  442720 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 04:11:39.088865  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:39.148946  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 04:11:39.156730  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 04:11:39.156759  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 04:11:39.163455  442720 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 04:11:39.163482  442720 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 04:11:39.167154  442720 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 04:11:39.167177  442720 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 04:11:39.211201  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 04:11:39.281887  442720 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 04:11:39.281914  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 04:11:39.339569  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:39.340734  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 04:11:39.340758  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 04:11:39.387969  442720 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 04:11:39.387995  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 04:11:39.388302  442720 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 04:11:39.388315  442720 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 04:11:39.428935  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 04:11:39.439896  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 04:11:39.439969  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 04:11:39.529400  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 04:11:39.529487  442720 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 04:11:39.560858  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 04:11:39.560934  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 04:11:39.567396  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 04:11:39.741290  442720 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:11:39.741363  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 04:11:39.762045  442720 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.022869955s)
	I1216 04:11:39.762122  442720 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
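
The long bash pipeline that completed at 04:11:39.762 edits CoreDNS in place: it dumps the coredns ConfigMap as YAML, splices two directives into the Corefile with sed, and replaces the ConfigMap. Unescaped, the sed expressions add a "log" directive ahead of "errors" and the following hosts stanza ahead of the "forward . /etc/resolv.conf" line, which is what makes host.minikube.internal resolve to the host-side gateway 192.168.49.1 from inside the cluster:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
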
	I1216 04:11:39.763118  442720 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.589176011s)
	I1216 04:11:39.763793  442720 node_ready.go:35] waiting up to 6m0s for node "addons-266389" to be "Ready" ...
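
node_ready.go:35 starts a bounded wait (6m0s here) on the node's Ready condition; the recurring `node "addons-266389" has "Ready":"False" status (will retry)` warnings below are its polling output. A minimal client-go sketch of such a wait, assuming a clientset cs; this is illustrative, not minikube's internals:

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls until the node's NodeReady condition is True or the
	// timeout expires, mirroring the 6m0s budget logged above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient API errors; keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
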
	I1216 04:11:39.806419  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 04:11:39.806500  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 04:11:40.109312  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 04:11:40.109378  442720 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 04:11:40.182430  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:11:40.266369  442720 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-266389" context rescaled to 1 replicas
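
kapi.go:214 records the coredns deployment being scaled down to a single replica, which suits a one-node cluster like this. A sketch of that rescale through the deployment's scale subresource, assuming a clientset cs; not minikube's exact code:

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS writes the scale subresource of the coredns deployment
	// down to one replica, as the "rescaled to 1 replicas" line records.
	func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
		s, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		s.Spec.Replicas = 1
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", s, metav1.UpdateOptions{})
		return err
	}
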
	I1216 04:11:40.319228  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 04:11:40.319254  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 04:11:40.487048  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 04:11:40.487073  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 04:11:40.592564  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 04:11:40.592600  442720 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 04:11:40.798855  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 04:11:41.155354  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.274495408s)
	I1216 04:11:41.155427  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.208102655s)
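
The "Completed: ... (N.NNs)" lines show that the individual applies were launched concurrently and are reaped as they finish (the deployment.yaml apply took 2.27s, the amd-gpu-device-plugin one 2.21s). A rough sketch of that fan-out/fan-in shape using errgroup; minikube's actual orchestration differs in detail:

	package sketch

	import (
		"context"
		"log"
		"time"

		"golang.org/x/sync/errgroup"
	)

	// applyAll fans out one apply per manifest and logs each completion with
	// its elapsed time, matching the shape of the Completed: ... (N.NNs) lines.
	func applyAll(ctx context.Context, apply func(context.Context, string) error, files []string) error {
		g, ctx := errgroup.WithContext(ctx)
		for _, f := range files {
			f := f // capture loop variable (pre-Go 1.22 semantics)
			g.Go(func() error {
				start := time.Now()
				err := apply(ctx, f)
				log.Printf("Completed: kubectl apply -f %s: (%s)", f, time.Since(start))
				return err
			})
		}
		return g.Wait()
	}
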
	W1216 04:11:41.774834  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:43.139076  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.162694263s)
	I1216 04:11:43.139294  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.162786309s)
	W1216 04:11:43.776802  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:43.938199  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.913131596s)
	I1216 04:11:43.938232  442720 addons.go:495] Verifying addon ingress=true in "addons-266389"
	I1216 04:11:43.938443  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.909794826s)
	I1216 04:11:43.938491  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.890713775s)
	I1216 04:11:43.938570  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.849681197s)
	I1216 04:11:43.938639  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.789668256s)
	I1216 04:11:43.938651  442720 addons.go:495] Verifying addon metrics-server=true in "addons-266389"
	I1216 04:11:43.938702  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.72747972s)
	I1216 04:11:43.938744  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.599146238s)
	I1216 04:11:43.938770  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.509757609s)
	I1216 04:11:43.938933  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.371462908s)
	I1216 04:11:43.938946  442720 addons.go:495] Verifying addon registry=true in "addons-266389"
	I1216 04:11:43.939099  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.756585651s)
	W1216 04:11:43.939125  442720 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 04:11:43.939139  442720 retry.go:31] will retry after 300.130251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
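
The failure above is a classic CRD-establishment race: a single kubectl apply both creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass object that depends on them, and the REST mapper has not yet discovered the new API group when the CR is processed, hence `no matches for kind "VolumeSnapshotClass"` and the "ensure CRDs are installed first" hint. minikube treats this as retryable (retry.go:31, first delay ~300ms) and, as the 04:11:44.240461 line below shows, re-runs the apply with --force once the CRDs are established. A minimal sketch of that retry shape; only the first delay is taken from the log, the doubling and attempt count are illustrative:

	package sketch

	import "time"

	// applyWithRetry re-runs a failed apply after a short backoff, giving the
	// API server time to establish the just-created CRDs before the dependent
	// custom resource is mapped.
	func applyWithRetry(apply func() error, attempts int) error {
		backoff := 300 * time.Millisecond // first delay matches the log
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			time.Sleep(backoff)
			backoff *= 2
		}
		return err
	}
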
	I1216 04:11:43.941420  442720 out.go:179] * Verifying ingress addon...
	I1216 04:11:43.943434  442720 out.go:179] * Verifying registry addon...
	I1216 04:11:43.943490  442720 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-266389 service yakd-dashboard -n yakd-dashboard
	
	I1216 04:11:43.947202  442720 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 04:11:43.947997  442720 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 04:11:43.955378  442720 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 04:11:43.955403  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:43.955915  442720 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 04:11:43.955934  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
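
The kapi.go:75/86/96 triplets set up the verification loops that dominate the rest of this log: for each addon, list the pods matching a label selector, record how many were found, then re-check (roughly every half second, judging by the timestamps) until every pod has left Pending. A condensed client-go sketch of one iteration, assuming a clientset cs; the helper name is illustrative:

	package sketch

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podsRunning is one iteration of the check behind the "current state:
	// Pending" lines: every pod matching the selector must have reached the
	// Running phase.
	func podsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	}
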
	I1216 04:11:44.185502  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.386592453s)
	I1216 04:11:44.185587  442720 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-266389"
	I1216 04:11:44.189005  442720 out.go:179] * Verifying csi-hostpath-driver addon...
	I1216 04:11:44.191933  442720 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 04:11:44.200370  442720 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 04:11:44.200399  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:44.240461  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:11:44.452570  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:44.452841  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:44.696072  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:44.951297  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:44.951894  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:45.196218  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:45.285041  442720 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 04:11:45.285184  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:45.307855  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
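
To reach the node for the gcp-auth setup, the driver first asks Docker which host port is published for the container's 22/tcp (the Go template in the cli_runner.go:164 line), then dials a fresh SSH client at 127.0.0.1 on that port (33133 in this run) with the machine's id_rsa key. A sketch of the port lookup using the same inspect template the log records:

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// sshPort asks Docker which host port is published for the container's
	// 22/tcp, via the same Go template seen in the cli_runner.go line.
	func sshPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
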
	I1216 04:11:45.414682  442720 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 04:11:45.428338  442720 addons.go:239] Setting addon gcp-auth=true in "addons-266389"
	I1216 04:11:45.428385  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:45.428829  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:45.447012  442720 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 04:11:45.447063  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:45.451607  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:45.455119  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:45.467407  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:45.695035  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:45.951001  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:45.952289  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:46.195382  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:46.267143  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:46.450630  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:46.451768  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:46.697135  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:46.947530  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.707021381s)
	I1216 04:11:46.947667  442720 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.500630917s)
	I1216 04:11:46.950923  442720 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:11:46.951469  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:46.951947  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:46.956641  442720 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 04:11:46.959452  442720 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 04:11:46.959520  442720 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 04:11:46.973347  442720 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 04:11:46.973412  442720 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 04:11:46.986433  442720 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 04:11:46.986456  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 04:11:47.000044  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 04:11:47.196025  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:47.455705  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:47.456056  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:47.537360  442720 addons.go:495] Verifying addon gcp-auth=true in "addons-266389"
	I1216 04:11:47.542492  442720 out.go:179] * Verifying gcp-auth addon...
	I1216 04:11:47.546221  442720 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 04:11:47.556267  442720 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 04:11:47.556335  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:47.695461  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:47.951111  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:47.951370  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:48.050355  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:48.195729  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:48.451265  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:48.452363  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:48.549518  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:48.695667  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:48.767954  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:48.950394  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:48.950964  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:49.050377  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:49.195354  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:49.450216  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:49.451710  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:49.549516  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:49.695686  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:49.951437  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:49.951587  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:50.050072  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:50.194963  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:50.451311  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:50.451524  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:50.549470  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:50.696145  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:50.951513  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:50.951646  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:51.050022  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:51.195246  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:51.267100  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:51.451350  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:51.451492  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:51.558447  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:51.695502  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:51.950721  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:51.951449  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:52.049489  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:52.196120  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:52.451396  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:52.451830  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:52.549515  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:52.695760  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:52.951232  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:52.951415  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:53.049529  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:53.195471  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:53.267221  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:53.450150  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:53.451603  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:53.549273  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:53.695139  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:53.950640  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:53.952158  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:54.049174  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:54.194936  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:54.451623  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:54.452119  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:54.549956  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:54.695475  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:54.950026  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:54.951130  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:55.049347  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:55.195220  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:55.267585  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:55.450627  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:55.451976  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:55.549908  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:55.696016  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:55.950358  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:55.951829  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:56.049667  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:56.195342  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:56.450506  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:56.451114  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:56.549923  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:56.696164  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:56.951560  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:56.951999  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:57.049744  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:57.195601  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:57.451003  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:57.451390  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:57.549691  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:57.695638  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:57.768061  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:57.950344  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:57.951182  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:58.050333  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:58.195266  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:58.451342  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:58.451702  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:58.549743  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:58.695597  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:58.950220  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:58.951503  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:59.049277  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:59.195867  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:59.453322  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:59.453753  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:59.549795  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:59.696097  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:59.950917  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:59.951113  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:00.050673  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:00.210136  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:00.271659  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:00.454084  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:00.454370  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:00.550152  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:00.695242  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:00.951243  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:00.951306  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:01.049025  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:01.195106  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:01.449986  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:01.451411  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:01.549028  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:01.695333  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:01.951580  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:01.951776  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:02.049971  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:02.194858  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:02.452974  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:02.453212  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:02.550175  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:02.695677  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:02.767808  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:02.951358  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:02.951413  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:03.048970  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:03.194806  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:03.451660  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:03.451956  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:03.549885  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:03.695728  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:03.950918  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:03.951209  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:04.050048  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:04.195150  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:04.451587  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:04.451721  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:04.549600  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:04.696019  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:04.768600  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:04.950746  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:04.951404  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:05.049622  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:05.195614  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:05.450829  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:05.451073  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:05.549986  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:05.694990  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:05.950677  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:05.951439  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:06.056875  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:06.195034  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:06.450737  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:06.451447  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:06.549288  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:06.696089  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:06.950487  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:06.951515  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:07.049611  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:07.195564  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:07.267473  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:07.450274  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:07.451616  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:07.549484  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:07.695833  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:07.951291  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:07.951765  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:08.049994  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:08.194990  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:08.451158  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:08.451491  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:08.549115  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:08.695045  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:08.951968  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:08.952121  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:09.050102  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:09.194797  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:09.450757  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:09.450934  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:09.550003  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:09.695104  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:09.766748  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:09.950986  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:09.951319  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:10.049483  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:10.195383  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:10.451076  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:10.451416  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:10.549291  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:10.695765  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:10.951579  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:10.951689  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:11.049573  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:11.195858  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:11.451465  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:11.453056  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:11.551009  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:11.695260  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:11.766991  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:11.950149  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:11.950798  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:12.049627  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:12.195559  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:12.451303  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:12.451540  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:12.549266  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:12.695757  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:12.950357  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:12.951571  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:13.049796  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:13.196017  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:13.450296  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:13.451283  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:13.549423  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:13.696375  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:13.767110  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:13.950285  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:13.950950  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:14.049927  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:14.195774  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:14.450863  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:14.451103  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:14.548998  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:14.695196  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:14.950450  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:14.951070  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:15.050460  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:15.195594  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:15.451376  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:15.451500  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:15.549529  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:15.695388  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:15.768326  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:15.950863  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:15.951487  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:16.049681  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:16.196044  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:16.451315  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:16.451473  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:16.554319  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:16.695453  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:16.951695  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:16.951812  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:17.049695  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:17.195745  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:17.450974  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:17.451072  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:17.549795  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:17.695109  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:17.952208  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:17.952350  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:18.049532  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:18.195506  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:18.267306  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:18.451228  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:18.451353  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:18.549390  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:18.695196  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:18.950990  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:18.951135  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:19.050001  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:19.213202  442720 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 04:12:19.213231  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
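
The kapi.go:86/96 pairs above first enumerate the pods matching a label selector, then poll that set until every pod leaves Pending. A minimal client-go sketch of the enumeration step, assuming a standard kubeconfig rather than the test harness's own config plumbing:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podsForSelector mirrors kapi.go:86: list the pods matching a label
    // selector; the wait loop then polls this set until all are Running.
    func podsForSelector(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) ([]corev1.Pod, error) {
    	list, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return nil, err
    	}
    	return list.Items, nil
    }

    func main() {
    	// Assumption: ~/.kube/config points at the cluster under test.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pods, _ := podsForSelector(context.Background(), cs, "kube-system",
    		"kubernetes.io/minikube-addons=csi-hostpath-driver")
    	for _, p := range pods {
    		fmt.Println(p.Name, p.Status.Phase)
    	}
    }
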
	I1216 04:12:19.291087  442720 node_ready.go:49] node "addons-266389" is "Ready"
	I1216 04:12:19.291128  442720 node_ready.go:38] duration metric: took 39.527281242s for node "addons-266389" to be "Ready" ...
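
The node_ready.go lines above resolve once the node's NodeReady condition flips to True; the earlier "will retry" warnings are the same check failing. A compact sketch of the condition test itself, operating on a node fetched as in the listing sketch above:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // nodeIsReady mirrors node_ready.go: a node counts as "Ready" when
    // its NodeReady condition reports status True.
    func nodeIsReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// An empty node carries no conditions, so it is not ready.
    	fmt.Println(nodeIsReady(&corev1.Node{}))
    }
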
	I1216 04:12:19.291142  442720 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:12:19.291201  442720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:19.312218  442720 api_server.go:72] duration metric: took 41.935045837s to wait for apiserver process to appear ...
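
The process wait above shells out to pgrep; with -f the pattern is matched against the full command line, -x requires the whole line to match, and -n returns only the newest matching PID. A local sketch of the same probe (the test actually issues it over its SSH runner with sudo):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // apiserverPid mirrors the pgrep probe in the log: a non-zero exit
    // means no kube-apiserver process matched the pattern yet.
    func apiserverPid() (string, error) {
    	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	return string(out), err
    }

    func main() {
    	pid, err := apiserverPid()
    	fmt.Println(pid, err)
    }
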
	I1216 04:12:19.312245  442720 api_server.go:88] waiting for apiserver healthz status ...
	I1216 04:12:19.312266  442720 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 04:12:19.326704  442720 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 04:12:19.328620  442720 api_server.go:141] control plane version: v1.34.2
	I1216 04:12:19.328651  442720 api_server.go:131] duration metric: took 16.39836ms to wait for apiserver health ...
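
The healthz wait above is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A minimal sketch of that probe; note it skips certificate verification purely for illustration, whereas the real client authenticates against the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    // checkHealthz probes the apiserver /healthz endpoint, as in the
    // api_server.go lines above.
    func checkHealthz(url string) (bool, error) {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
    	}}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return false, err
    	}
    	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
    }

    func main() {
    	ok, err := checkHealthz("https://192.168.49.2:8443/healthz")
    	fmt.Println(ok, err)
    }
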
	I1216 04:12:19.328661  442720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 04:12:19.339803  442720 system_pods.go:59] 19 kube-system pods found
	I1216 04:12:19.339841  442720 system_pods.go:61] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:12:19.339848  442720 system_pods.go:61] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending
	I1216 04:12:19.339855  442720 system_pods.go:61] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending
	I1216 04:12:19.339860  442720 system_pods.go:61] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending
	I1216 04:12:19.339864  442720 system_pods.go:61] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:19.339870  442720 system_pods.go:61] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:19.339875  442720 system_pods.go:61] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:19.339879  442720 system_pods.go:61] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:19.339882  442720 system_pods.go:61] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending
	I1216 04:12:19.339886  442720 system_pods.go:61] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:19.339890  442720 system_pods.go:61] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:19.339896  442720 system_pods.go:61] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending
	I1216 04:12:19.339900  442720 system_pods.go:61] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending
	I1216 04:12:19.339907  442720 system_pods.go:61] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending
	I1216 04:12:19.339911  442720 system_pods.go:61] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending
	I1216 04:12:19.339915  442720 system_pods.go:61] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending
	I1216 04:12:19.339935  442720 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending
	I1216 04:12:19.339939  442720 system_pods.go:61] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending
	I1216 04:12:19.339942  442720 system_pods.go:61] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Pending
	I1216 04:12:19.339948  442720 system_pods.go:74] duration metric: took 11.281108ms to wait for pod list to return data ...
	I1216 04:12:19.339958  442720 default_sa.go:34] waiting for default service account to be created ...
	I1216 04:12:19.345302  442720 default_sa.go:45] found service account: "default"
	I1216 04:12:19.345332  442720 default_sa.go:55] duration metric: took 5.367009ms for default service account to be created ...
	I1216 04:12:19.345343  442720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 04:12:19.361587  442720 system_pods.go:86] 19 kube-system pods found
	I1216 04:12:19.361625  442720 system_pods.go:89] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:12:19.361633  442720 system_pods.go:89] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending
	I1216 04:12:19.361638  442720 system_pods.go:89] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending
	I1216 04:12:19.361642  442720 system_pods.go:89] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending
	I1216 04:12:19.361646  442720 system_pods.go:89] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:19.361651  442720 system_pods.go:89] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:19.361655  442720 system_pods.go:89] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:19.361660  442720 system_pods.go:89] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:19.361671  442720 system_pods.go:89] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending
	I1216 04:12:19.361675  442720 system_pods.go:89] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:19.361680  442720 system_pods.go:89] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:19.361687  442720 system_pods.go:89] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending
	I1216 04:12:19.361691  442720 system_pods.go:89] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending
	I1216 04:12:19.361695  442720 system_pods.go:89] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending
	I1216 04:12:19.361706  442720 system_pods.go:89] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending
	I1216 04:12:19.361710  442720 system_pods.go:89] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending
	I1216 04:12:19.361714  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending
	I1216 04:12:19.361719  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending
	I1216 04:12:19.361728  442720 system_pods.go:89] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Pending
	I1216 04:12:19.361743  442720 retry.go:31] will retry after 259.306788ms: missing components: kube-dns
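
Each "will retry after ..." line above is one turn of a poll-with-jittered-backoff loop: list the kube-system pods, report which required components are still missing, sleep a randomized interval, try again. A minimal sketch of that pattern; the base interval and growth factor are assumptions, not minikube's actual tuning:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil polls check until it succeeds or the deadline passes,
    // sleeping a jittered, growing interval between attempts, in the
    // spirit of the retry.go lines in this log.
    func retryUntil(deadline time.Duration, check func() error) error {
    	start := time.Now()
    	base := 200 * time.Millisecond // assumed base interval
    	for time.Since(start) < deadline {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		sleep := base + time.Duration(rand.Int63n(int64(base))) // add jitter
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		base = base * 3 / 2 // grow the interval
    	}
    	return errors.New("timed out waiting for components")
    }

    func main() {
    	attempts := 0
    	_ = retryUntil(5*time.Second, func() error {
    		attempts++
    		if attempts < 3 {
    			return errors.New("missing components: kube-dns")
    		}
    		return nil
    	})
    }
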
	I1216 04:12:19.504587  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:19.505044  442720 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 04:12:19.505080  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:19.603299  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:19.666117  442720 system_pods.go:86] 19 kube-system pods found
	I1216 04:12:19.666157  442720 system_pods.go:89] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:12:19.666168  442720 system_pods.go:89] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:12:19.666173  442720 system_pods.go:89] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending
	I1216 04:12:19.666180  442720 system_pods.go:89] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending
	I1216 04:12:19.666183  442720 system_pods.go:89] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:19.666188  442720 system_pods.go:89] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:19.666192  442720 system_pods.go:89] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:19.666197  442720 system_pods.go:89] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:19.666202  442720 system_pods.go:89] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending
	I1216 04:12:19.666211  442720 system_pods.go:89] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:19.666215  442720 system_pods.go:89] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:19.666219  442720 system_pods.go:89] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending
	I1216 04:12:19.666226  442720 system_pods.go:89] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending
	I1216 04:12:19.666232  442720 system_pods.go:89] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:12:19.666246  442720 system_pods.go:89] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:12:19.666250  442720 system_pods.go:89] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending
	I1216 04:12:19.666258  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:19.666267  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending
	I1216 04:12:19.666271  442720 system_pods.go:89] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Pending
	I1216 04:12:19.666285  442720 retry.go:31] will retry after 330.370387ms: missing components: kube-dns
	I1216 04:12:19.712049  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:19.958354  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:19.960328  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:20.008251  442720 system_pods.go:86] 19 kube-system pods found
	I1216 04:12:20.008303  442720 system_pods.go:89] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:12:20.008314  442720 system_pods.go:89] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:12:20.008322  442720 system_pods.go:89] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:12:20.008329  442720 system_pods.go:89] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:12:20.008335  442720 system_pods.go:89] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:20.008341  442720 system_pods.go:89] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:20.008346  442720 system_pods.go:89] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:20.008353  442720 system_pods.go:89] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:20.008362  442720 system_pods.go:89] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:12:20.008371  442720 system_pods.go:89] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:20.008377  442720 system_pods.go:89] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:20.008383  442720 system_pods.go:89] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:12:20.008393  442720 system_pods.go:89] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:12:20.008400  442720 system_pods.go:89] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:12:20.008413  442720 system_pods.go:89] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:12:20.008422  442720 system_pods.go:89] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:12:20.008431  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:20.008439  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:20.008445  442720 system_pods.go:89] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:12:20.008463  442720 retry.go:31] will retry after 417.545915ms: missing components: kube-dns
	I1216 04:12:20.059767  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:20.197096  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:20.433352  442720 system_pods.go:86] 19 kube-system pods found
	I1216 04:12:20.433389  442720 system_pods.go:89] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:12:20.433398  442720 system_pods.go:89] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:12:20.433406  442720 system_pods.go:89] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:12:20.433412  442720 system_pods.go:89] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:12:20.433429  442720 system_pods.go:89] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:20.433438  442720 system_pods.go:89] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:20.433452  442720 system_pods.go:89] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:20.433456  442720 system_pods.go:89] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:20.433471  442720 system_pods.go:89] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:12:20.433475  442720 system_pods.go:89] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:20.433480  442720 system_pods.go:89] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:20.433491  442720 system_pods.go:89] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:12:20.433498  442720 system_pods.go:89] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:12:20.433503  442720 system_pods.go:89] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:12:20.433509  442720 system_pods.go:89] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:12:20.433519  442720 system_pods.go:89] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:12:20.433528  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:20.433539  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:20.433543  442720 system_pods.go:89] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Running
	I1216 04:12:20.433563  442720 retry.go:31] will retry after 567.761058ms: missing components: kube-dns
	I1216 04:12:20.452468  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:20.455435  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:20.550099  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:20.695755  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:20.965526  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:20.982007  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:21.067474  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:21.069650  442720 system_pods.go:86] 19 kube-system pods found
	I1216 04:12:21.069681  442720 system_pods.go:89] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Running
	I1216 04:12:21.069697  442720 system_pods.go:89] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:12:21.069705  442720 system_pods.go:89] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:12:21.069715  442720 system_pods.go:89] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:12:21.069719  442720 system_pods.go:89] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:21.069724  442720 system_pods.go:89] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:21.069728  442720 system_pods.go:89] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:21.069733  442720 system_pods.go:89] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:21.069740  442720 system_pods.go:89] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:12:21.069752  442720 system_pods.go:89] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:21.069766  442720 system_pods.go:89] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:21.069780  442720 system_pods.go:89] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:12:21.069787  442720 system_pods.go:89] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:12:21.069799  442720 system_pods.go:89] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:12:21.069806  442720 system_pods.go:89] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:12:21.069817  442720 system_pods.go:89] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:12:21.069823  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:21.069836  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:21.069840  442720 system_pods.go:89] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Running
	I1216 04:12:21.069849  442720 system_pods.go:126] duration metric: took 1.724500102s to wait for k8s-apps to be running ...
	I1216 04:12:21.069860  442720 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 04:12:21.069916  442720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:12:21.088629  442720 system_svc.go:56] duration metric: took 18.759227ms WaitForService to wait for kubelet
	I1216 04:12:21.088658  442720 kubeadm.go:587] duration metric: took 43.711489975s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
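
The kubelet wait above relies on systemctl's exit code: with --quiet nothing is printed and a zero exit means the unit is active. A local sketch of the probe (the test issues it through its SSH runner with sudo):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletActive mirrors the system_svc.go check: systemctl is-active
    // --quiet signals the unit state purely via its exit code.
    func kubeletActive() bool {
    	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
    	fmt.Println("kubelet active:", kubeletActive())
    }
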
	I1216 04:12:21.088676  442720 node_conditions.go:102] verifying NodePressure condition ...
	I1216 04:12:21.091946  442720 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 04:12:21.091978  442720 node_conditions.go:123] node cpu capacity is 2
	I1216 04:12:21.091994  442720 node_conditions.go:105] duration metric: took 3.31277ms to run NodePressure ...
	I1216 04:12:21.092008  442720 start.go:242] waiting for startup goroutines ...
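
The NodePressure step above reads the node's capacity figures (ephemeral storage and CPU) from its status. A small sketch of pulling those two values off a node object fetched as in the earlier client-go sketches:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // printCapacity reports the two figures the node_conditions.go lines
    // above log: ephemeral-storage and cpu capacity.
    func printCapacity(node *corev1.Node) {
    	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
    	fmt.Println("cpu:", node.Status.Capacity.Cpu().Value())
    }

    func main() {
    	// An empty node reports zero capacity; a real one comes from
    	// cs.CoreV1().Nodes().Get(...), as sketched earlier.
    	printCapacity(&corev1.Node{})
    }
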
	I1216 04:12:21.196331  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:21.451780  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:21.452525  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:21.549636  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:21.696197  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:21.951359  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:21.951799  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:22.049981  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:22.195301  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:22.452046  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:22.452671  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:22.549571  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:22.695948  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:22.954407  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:22.954681  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:23.050165  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:23.195412  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:23.452527  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:23.453112  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:23.549959  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:23.696314  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:23.954405  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:23.954772  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:24.050080  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:24.195970  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:24.452774  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:24.452934  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:24.549638  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:24.695791  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:24.953700  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:24.953821  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:25.049931  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:25.196023  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:25.452095  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:25.452716  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:25.549752  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:25.696509  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:25.952028  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:25.952186  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:26.056080  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:26.195772  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:26.452022  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:26.452185  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:26.549263  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:26.696589  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:26.952501  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:26.952972  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:27.050590  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:27.196319  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:27.451532  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:27.451742  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:27.549539  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:27.696313  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:27.952214  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:27.953484  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:28.053505  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:28.196212  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:28.453089  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:28.453511  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:28.549523  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:28.696764  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:28.952519  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:28.952777  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:29.050457  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:29.196105  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:29.452540  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:29.452754  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:29.550072  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:29.696323  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:29.962999  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:29.963381  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:30.062033  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:30.195972  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:30.452615  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:30.453119  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:30.550205  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:30.696711  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:30.953288  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:30.953812  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:31.049331  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:31.195329  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:31.452841  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:31.453172  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:31.550378  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:31.696424  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:31.956398  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:31.956865  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:32.050202  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:32.196394  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:32.452210  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:32.452569  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:32.549593  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:32.696194  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:32.952953  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:32.953402  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:33.049838  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:33.196374  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:33.453020  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:33.453735  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:33.550209  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:33.695889  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:33.952932  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:33.953561  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:34.050502  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:34.195539  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:34.452146  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:34.452621  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:34.550272  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:34.696150  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:34.952754  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:34.952962  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:35.050480  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:35.204544  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:35.452017  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:35.453239  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:35.549338  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:35.696237  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:35.955369  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:35.957180  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:36.050300  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:36.196450  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:36.451468  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:36.452618  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:36.551258  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:36.695699  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:36.954606  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:36.955047  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:37.066404  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:37.204875  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:37.454535  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:37.455098  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:37.554857  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:37.696397  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:37.952434  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:37.952834  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:38.069178  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:38.195590  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:38.451714  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:38.451912  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:38.549761  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:38.696421  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:38.953194  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:38.953325  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:39.051329  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:39.195876  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:39.453525  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:39.453758  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:39.549928  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:39.695348  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:39.951139  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:39.951326  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:40.055122  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:40.202978  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:40.454335  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:40.454454  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:40.550470  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:40.703482  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:40.952698  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:40.953002  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:41.050552  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:41.207624  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:41.454664  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:41.455148  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:41.550421  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:41.696513  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:41.951422  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:41.952935  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:42.050665  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:42.196651  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:42.452704  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:42.454331  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:42.550179  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:42.695870  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:42.952197  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:42.952369  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:43.050331  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:43.197003  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:43.453158  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:43.454610  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:43.549861  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:43.695990  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:43.951333  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:43.952380  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:44.051033  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:44.200058  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:44.452814  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:44.453679  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:44.549961  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:44.696080  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:44.955272  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:44.955414  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:45.062864  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:45.197531  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:45.450958  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:45.451501  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:45.549692  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:45.696214  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:45.953082  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:45.953385  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:46.050169  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:46.196573  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:46.452422  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:46.452781  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:46.550075  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:46.695617  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:46.951594  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:46.951749  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:47.049669  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:47.196762  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:47.453712  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:47.454113  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:47.550620  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:47.696398  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:47.951540  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:47.951677  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:48.050167  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:48.195921  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:48.451012  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:48.451801  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:48.549685  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:48.696369  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:48.951796  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:48.953332  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:49.049367  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:49.195875  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:49.452525  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:49.453500  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:49.549668  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:49.696683  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:49.951935  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:49.952619  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:50.050294  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:50.196721  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:50.451360  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:50.451517  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:50.552306  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:50.696987  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:50.953461  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:50.953962  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:51.052450  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:51.195919  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:51.455857  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:51.456317  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:51.550030  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:51.698670  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:51.955867  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:51.957163  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:52.049667  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:52.197918  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:52.460124  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:52.460752  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:52.551430  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:52.696128  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:52.954185  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:52.954758  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:53.050083  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:53.196459  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:53.453974  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:53.454268  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:53.548856  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:53.695607  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:53.952120  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:53.952260  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:54.049349  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:54.196199  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:54.450576  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:54.452772  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:54.550225  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:54.695932  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:54.950637  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:54.952934  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:55.049945  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:55.194999  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:55.451075  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:55.451163  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:55.550235  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:55.695550  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:55.951728  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:55.951873  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:56.049676  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:56.204297  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:56.451598  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:56.451978  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:56.550258  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:56.695539  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:56.952761  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:56.953326  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:57.049597  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:57.195480  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:57.453022  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:57.453219  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:57.550240  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:57.695903  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:57.951655  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:57.951750  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:58.050254  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:58.195784  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:58.452984  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:58.465641  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:58.549738  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:58.696666  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:58.953134  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:58.953588  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:59.049730  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:59.197838  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:59.453369  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:59.454403  442720 kapi.go:107] duration metric: took 1m15.507202389s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 04:12:59.550016  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:59.699171  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:59.951318  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:00.083804  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:00.200403  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:00.455827  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:00.551935  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:00.713277  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:00.951622  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:01.052498  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:01.196007  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:01.453397  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:01.560692  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:01.695782  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:01.951010  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:02.050008  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:02.196885  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:02.452092  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:02.550320  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:02.695966  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:02.951782  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:03.049770  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:03.197175  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:03.451869  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:03.549851  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:03.695514  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:03.952165  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:04.049206  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:04.195835  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:04.452526  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:04.551826  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:04.696027  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:04.951523  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:05.049710  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:05.197620  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:05.452313  442720 kapi.go:107] duration metric: took 1m21.504311633s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 04:13:05.549482  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:05.695848  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:06.049870  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:06.195165  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:06.550226  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:06.751530  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:07.049784  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:07.198133  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:07.552509  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:07.697373  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:08.049983  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:08.195777  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:08.549953  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:08.695133  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:09.050362  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:09.196072  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:09.558969  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:09.696165  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:10.055881  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:10.197196  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:10.549554  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:10.696633  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:11.052078  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:11.195551  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:11.550301  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:11.696356  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:12.049804  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:12.196382  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:12.549635  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:12.695843  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:13.050455  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:13.197497  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:13.550720  442720 kapi.go:107] duration metric: took 1m26.004500081s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 04:13:13.553722  442720 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-266389 cluster.
	I1216 04:13:13.556524  442720 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 04:13:13.559370  442720 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 04:13:13.696974  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:14.196571  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:14.695393  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:15.196887  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:15.695636  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:16.196306  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:16.696765  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:17.195793  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:17.695639  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:18.196062  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:18.697181  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:19.195530  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:19.696431  442720 kapi.go:107] duration metric: took 1m35.50449885s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 04:13:19.699674  442720 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, ingress-dns, storage-provisioner, metrics-server, registry-creds, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1216 04:13:19.702616  442720 addons.go:530] duration metric: took 1m42.325021089s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner-rancher inspektor-gadget ingress-dns storage-provisioner metrics-server registry-creds yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1216 04:13:19.702678  442720 start.go:247] waiting for cluster config update ...
	I1216 04:13:19.702717  442720 start.go:256] writing updated cluster config ...
	I1216 04:13:19.703056  442720 ssh_runner.go:195] Run: rm -f paused
	I1216 04:13:19.708771  442720 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:13:19.712568  442720 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6mwzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.719163  442720 pod_ready.go:94] pod "coredns-66bc5c9577-6mwzd" is "Ready"
	I1216 04:13:19.719194  442720 pod_ready.go:86] duration metric: took 6.591998ms for pod "coredns-66bc5c9577-6mwzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.721614  442720 pod_ready.go:83] waiting for pod "etcd-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.726525  442720 pod_ready.go:94] pod "etcd-addons-266389" is "Ready"
	I1216 04:13:19.726555  442720 pod_ready.go:86] duration metric: took 4.913779ms for pod "etcd-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.729000  442720 pod_ready.go:83] waiting for pod "kube-apiserver-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.734150  442720 pod_ready.go:94] pod "kube-apiserver-addons-266389" is "Ready"
	I1216 04:13:19.734180  442720 pod_ready.go:86] duration metric: took 5.153748ms for pod "kube-apiserver-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.736976  442720 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:20.113693  442720 pod_ready.go:94] pod "kube-controller-manager-addons-266389" is "Ready"
	I1216 04:13:20.113744  442720 pod_ready.go:86] duration metric: took 376.73701ms for pod "kube-controller-manager-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:20.316333  442720 pod_ready.go:83] waiting for pod "kube-proxy-qjxqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:20.714266  442720 pod_ready.go:94] pod "kube-proxy-qjxqh" is "Ready"
	I1216 04:13:20.714307  442720 pod_ready.go:86] duration metric: took 397.947561ms for pod "kube-proxy-qjxqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:20.913995  442720 pod_ready.go:83] waiting for pod "kube-scheduler-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:21.312759  442720 pod_ready.go:94] pod "kube-scheduler-addons-266389" is "Ready"
	I1216 04:13:21.312786  442720 pod_ready.go:86] duration metric: took 398.765416ms for pod "kube-scheduler-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:21.312799  442720 pod_ready.go:40] duration metric: took 1.603995293s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:13:21.372470  442720 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1216 04:13:21.375862  442720 out.go:179] * Done! kubectl is now configured to use "addons-266389" cluster and "default" namespace by default
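
The kapi.go:96 lines above are minikube's addon wait loop: it repeatedly lists pods by label selector (registry, ingress-nginx, gcp-auth, csi-hostpath-driver) until each set reports ready, and kapi.go:107 then logs the total wait. As a rough sketch only, not minikube's actual implementation, the same label-selector poll can be written with client-go; the 500ms interval, 6-minute timeout, and kubeconfig path below are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls the API server until every pod matching selector in ns
// is Running, mirroring the "waiting for pod" loop in the log above.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	// Illustrative assumption: default kubeconfig at ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	start := time.Now()
	if err := waitForLabel(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("took %s to wait for kubernetes.io/minikube-addons=registry\n", time.Since(start))
}
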
	
	
	==> CRI-O <==
	Dec 16 04:15:56 addons-266389 crio[828]: time="2025-12-16T04:15:56.038848751Z" level=info msg="Removed container 6f93aea8a90c1e65ec76cdc5928ada434f6667f382d3e884c8d60f04de0959d0: kube-system/registry-creds-764b6fb674-7cfhx/registry-creds" id=8ec87e18-77c7-4973-9740-393c59e64d72 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.236316675Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-qdmhx/POD" id=bb2aab3c-e191-4b08-8ed5-ebe60ca6bce7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.236383687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.257801037Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-qdmhx Namespace:default ID:d55f99220f6a360cf2c98ae184c7170f609c575d859a345908e5a02ab7a1bc05 UID:c57142e0-66f0-40bf-aaae-ce6d3d03e620 NetNS:/var/run/netns/57f1ee9c-a276-4e10-8fdb-56130cd45fe3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079640}] Aliases:map[]}"
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.257972961Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-qdmhx to CNI network \"kindnet\" (type=ptp)"
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.276067579Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-qdmhx Namespace:default ID:d55f99220f6a360cf2c98ae184c7170f609c575d859a345908e5a02ab7a1bc05 UID:c57142e0-66f0-40bf-aaae-ce6d3d03e620 NetNS:/var/run/netns/57f1ee9c-a276-4e10-8fdb-56130cd45fe3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079640}] Aliases:map[]}"
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.276219564Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-qdmhx for CNI network kindnet (type=ptp)"
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.285763436Z" level=info msg="Ran pod sandbox d55f99220f6a360cf2c98ae184c7170f609c575d859a345908e5a02ab7a1bc05 with infra container: default/hello-world-app-5d498dc89-qdmhx/POD" id=bb2aab3c-e191-4b08-8ed5-ebe60ca6bce7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.287199996Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b9dd7105-f24f-4840-a6ff-183596397c26 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.287333634Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=b9dd7105-f24f-4840-a6ff-183596397c26 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.287372732Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=b9dd7105-f24f-4840-a6ff-183596397c26 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.290747961Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=8de72c2f-a8f7-4091-a53d-f0fc70e7df70 name=/runtime.v1.ImageService/PullImage
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.294610456Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.951948255Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=8de72c2f-a8f7-4091-a53d-f0fc70e7df70 name=/runtime.v1.ImageService/PullImage
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.952854791Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=fdf0006b-6e49-4c93-9886-91aa932fe01c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.960211967Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9bd949fe-fa71-4189-a95d-c0539a7d869c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.972527337Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-qdmhx/hello-world-app" id=8a15916c-b6a0-41d9-b3c3-41163ddc6277 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.972834671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.987005239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.987383778Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5fbbaf9e9b506ba0dc4795e503d9f34177ab005c284f878ced8002bfc7e9c09e/merged/etc/passwd: no such file or directory"
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.988770803Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5fbbaf9e9b506ba0dc4795e503d9f34177ab005c284f878ced8002bfc7e9c09e/merged/etc/group: no such file or directory"
	Dec 16 04:16:29 addons-266389 crio[828]: time="2025-12-16T04:16:29.989228653Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:16:30 addons-266389 crio[828]: time="2025-12-16T04:16:30.059649643Z" level=info msg="Created container bedb980035cba3c2506ea06afff7a3416765989d7d4c4508f2fcf1596a03c40f: default/hello-world-app-5d498dc89-qdmhx/hello-world-app" id=8a15916c-b6a0-41d9-b3c3-41163ddc6277 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 04:16:30 addons-266389 crio[828]: time="2025-12-16T04:16:30.06659536Z" level=info msg="Starting container: bedb980035cba3c2506ea06afff7a3416765989d7d4c4508f2fcf1596a03c40f" id=3d64083a-694d-46dc-8eb4-5fe017b8ab75 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 04:16:30 addons-266389 crio[828]: time="2025-12-16T04:16:30.078625353Z" level=info msg="Started container" PID=7034 containerID=bedb980035cba3c2506ea06afff7a3416765989d7d4c4508f2fcf1596a03c40f description=default/hello-world-app-5d498dc89-qdmhx/hello-world-app id=3d64083a-694d-46dc-8eb4-5fe017b8ab75 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d55f99220f6a360cf2c98ae184c7170f609c575d859a345908e5a02ab7a1bc05
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	bedb980035cba       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   d55f99220f6a3       hello-world-app-5d498dc89-qdmhx             default
	0afa9f5e8f70b       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             35 seconds ago           Exited              registry-creds                           4                   0d36f44dde762       registry-creds-764b6fb674-7cfhx             kube-system
	d0aaf3ac41350       10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4                                                                             2 minutes ago            Running             nginx                                    0                   550cf213b8704       nginx                                       default
	106a996d5d6db       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   44b2c79c4e368       busybox                                     default
	12223ad132387       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   b91e84b66173f       csi-hostpathplugin-4cntk                    kube-system
	0b4f3c5e893d7       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   b91e84b66173f       csi-hostpathplugin-4cntk                    kube-system
	c9070f308fd86       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   b91e84b66173f       csi-hostpathplugin-4cntk                    kube-system
	48496242e59c5       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   b91e84b66173f       csi-hostpathplugin-4cntk                    kube-system
	a7474c7cb9f49       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   127900e5f166a       gcp-auth-78565c9fb4-lzbjd                   gcp-auth
	a222cf8717975       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   b91e84b66173f       csi-hostpathplugin-4cntk                    kube-system
	c7eedac774bd3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   6c01c39b9f4d7       gadget-w7z9q                                gadget
	4c5dffefd81cd       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   9d9eb964ff234       ingress-nginx-controller-85d4c799dd-hbrzj   ingress-nginx
	52a17616824e6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   57f19482f7475       registry-proxy-k95mm                        kube-system
	3efc9d422c0c3       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   e6f26d96a71d3       nvidia-device-plugin-daemonset-pj9b6        kube-system
	6e3be5772ff86       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   5fb5baa3e63aa       metrics-server-85b7d694d7-5q887             kube-system
	6e142dfc84916       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   b91e84b66173f       csi-hostpathplugin-4cntk                    kube-system
	4da4c59550ee3       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   a0fc26bc203c1       csi-hostpath-resizer-0                      kube-system
	66770881f17c9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   ac15221f20c5d       snapshot-controller-7d9fbc56b8-t752l        kube-system
	9939f0868fb7f       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   325089d857753       local-path-provisioner-648f6765c9-wpj9t     local-path-storage
	84135c3563dc8       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   0d01c988792ec       registry-6b586f9694-6fhfq                   kube-system
	179d32a34b981       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   76c447629ef9d       cloud-spanner-emulator-5bdddb765-z56bg      default
	c3264da7d66b8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              patch                                    0                   2f2576a92cab9       ingress-nginx-admission-patch-8m974         ingress-nginx
	698b79e9ff28b       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   795d45c083f00       kube-ingress-dns-minikube                   kube-system
	63eba54ed2b9b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   7265206ba3b3c       csi-hostpath-attacher-0                     kube-system
	8b24d28c9cf9a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   cadcf9d984087       snapshot-controller-7d9fbc56b8-4ppgw        kube-system
	c56201a9b5ad7       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   89b8fe93f4571       yakd-dashboard-5ff678cb9-vt9kv              yakd-dashboard
	51949d99c72d1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   4 minutes ago            Exited              create                                   0                   e79ac9856339a       ingress-nginx-admission-create-n7d4f        ingress-nginx
	b3d0766b0e4db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   e954c6def2f36       coredns-66bc5c9577-6mwzd                    kube-system
	198a5f79252ec       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   96e8b2944b892       storage-provisioner                         kube-system
	71f0cfb9d9516       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   2731a7b865e7a       kube-proxy-qjxqh                            kube-system
	cb4b75c762835       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   08a103110ce8d       kindnet-b74jx                               kube-system
	9e53dfcedc5ae       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   2dfdf6c9f85dd       kube-controller-manager-addons-266389       kube-system
	4f4977c8f895c       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   1af2faf775e7b       kube-scheduler-addons-266389                kube-system
	6fd0cf07fb532       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   082537ad4aec4       kube-apiserver-addons-266389                kube-system
	d27466cb0ef32       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   4e9cbe27e2bb7       etcd-addons-266389                          kube-system
	
	
	==> coredns [b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7] <==
	[INFO] 10.244.0.17:43091 - 54870 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002393074s
	[INFO] 10.244.0.17:43091 - 57104 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000147447s
	[INFO] 10.244.0.17:43091 - 23109 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000250464s
	[INFO] 10.244.0.17:49326 - 18940 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154808s
	[INFO] 10.244.0.17:49326 - 18495 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000276081s
	[INFO] 10.244.0.17:43249 - 63453 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114504s
	[INFO] 10.244.0.17:43249 - 62984 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000163432s
	[INFO] 10.244.0.17:60253 - 28489 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111189s
	[INFO] 10.244.0.17:60253 - 28275 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000142828s
	[INFO] 10.244.0.17:58532 - 29703 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006377136s
	[INFO] 10.244.0.17:58532 - 30181 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006519439s
	[INFO] 10.244.0.17:57867 - 35577 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014355s
	[INFO] 10.244.0.17:57867 - 35757 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321291s
	[INFO] 10.244.0.21:33194 - 59766 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000169675s
	[INFO] 10.244.0.21:42409 - 61510 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000136543s
	[INFO] 10.244.0.21:33764 - 50073 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000178906s
	[INFO] 10.244.0.21:54747 - 44674 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143329s
	[INFO] 10.244.0.21:33825 - 466 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134434s
	[INFO] 10.244.0.21:43453 - 61701 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087156s
	[INFO] 10.244.0.21:33515 - 22580 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002190742s
	[INFO] 10.244.0.21:48036 - 14052 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001781416s
	[INFO] 10.244.0.21:39909 - 51309 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004799882s
	[INFO] 10.244.0.21:40339 - 64010 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001750729s
	[INFO] 10.244.0.23:33740 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000262583s
	[INFO] 10.244.0.23:48097 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000102672s
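
The NXDOMAIN burst above is ordinary Kubernetes DNS behavior, not an error: pod resolv.conf is typically written with ndots:5 and a search list, so a query for registry.kube-system.svc.cluster.local (four dots, below the ndots threshold) is first tried with each search suffix appended (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, and the host's us-east-2.compute.internal domain, each answered NXDOMAIN) and only then as-is, which returns NOERROR. A minimal sketch of that candidate expansion follows; the search list is inferred from the suffixes visible in the log, and ndots:5 is the usual kubelet default rather than something this report states.

package main

import (
	"fmt"
	"strings"
)

// candidates reproduces the usual glibc/ndots resolver ordering: a name with
// fewer dots than ndots is tried with each search suffix first, then as-is;
// otherwise it is tried as-is first, then with the search suffixes.
func candidates(name string, search []string, ndots int) []string {
	if strings.HasSuffix(name, ".") {
		return []string{name} // fully qualified: tried as-is only
	}
	suffixed := make([]string, 0, len(search)+1)
	for _, s := range search {
		suffixed = append(suffixed, name+"."+s)
	}
	if strings.Count(name, ".") < ndots {
		return append(suffixed, name)
	}
	return append([]string{name}, suffixed...)
}

func main() {
	// Search list inferred from the coredns log above; ndots:5 assumed.
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q)
	}
}

Running this prints exactly the sequence of names queried in the log: the four NXDOMAIN candidates, then the bare service name that finally resolves.
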
	
	
	==> describe nodes <==
	Name:               addons-266389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-266389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=addons-266389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T04_11_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-266389
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-266389"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 04:11:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-266389
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 04:16:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 04:15:56 +0000   Tue, 16 Dec 2025 04:11:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 04:15:56 +0000   Tue, 16 Dec 2025 04:11:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 04:15:56 +0000   Tue, 16 Dec 2025 04:11:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 04:15:56 +0000   Tue, 16 Dec 2025 04:12:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-266389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b01d95696b577408f2b2782693c8bc0
	  System UUID:                ca615f09-a740-47f8-928c-e2f0056267cb
	  Boot ID:                    e72ece1f-d416-4c20-8564-468e8b5f7888
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     cloud-spanner-emulator-5bdddb765-z56bg       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  default                     hello-world-app-5d498dc89-qdmhx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  gadget                      gadget-w7z9q                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  gcp-auth                    gcp-auth-78565c9fb4-lzbjd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-hbrzj    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m47s
	  kube-system                 coredns-66bc5c9577-6mwzd                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m53s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpathplugin-4cntk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 etcd-addons-266389                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m59s
	  kube-system                 kindnet-b74jx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m55s
	  kube-system                 kube-apiserver-addons-266389                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-addons-266389        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-qjxqh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-scheduler-addons-266389                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 metrics-server-85b7d694d7-5q887              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m48s
	  kube-system                 nvidia-device-plugin-daemonset-pj9b6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 registry-6b586f9694-6fhfq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 registry-creds-764b6fb674-7cfhx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 registry-proxy-k95mm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 snapshot-controller-7d9fbc56b8-4ppgw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 snapshot-controller-7d9fbc56b8-t752l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  local-path-storage          local-path-provisioner-648f6765c9-wpj9t      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vt9kv               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m51s                kube-proxy       
	  Normal   Starting                 5m5s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m5s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node addons-266389 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node addons-266389 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m5s (x8 over 5m5s)  kubelet          Node addons-266389 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m59s                kubelet          Node addons-266389 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m59s                kubelet          Node addons-266389 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m59s                kubelet          Node addons-266389 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m55s                node-controller  Node addons-266389 event: Registered Node addons-266389 in Controller
	  Normal   NodeReady                4m11s                kubelet          Node addons-266389 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec16 01:17] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014643] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.519830] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df] <==
	{"level":"warn","ts":"2025-12-16T04:11:27.757471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.781869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.813233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.836099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.857648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.877665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.894069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.909813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.933368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.945483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.966585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.985034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.014663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.032488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.069614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.104797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.119788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.143217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.212373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:44.528588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:44.546775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:12:05.975105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:12:05.989773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:12:06.026447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:12:06.040960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37658","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [a7474c7cb9f49060f42bfcb5204a0c64c8c19f1d15cd53cd9d307abfe50b208c] <==
	2025/12/16 04:13:13 GCP Auth Webhook started!
	2025/12/16 04:13:21 Ready to marshal response ...
	2025/12/16 04:13:21 Ready to write response ...
	2025/12/16 04:13:22 Ready to marshal response ...
	2025/12/16 04:13:22 Ready to write response ...
	2025/12/16 04:13:22 Ready to marshal response ...
	2025/12/16 04:13:22 Ready to write response ...
	2025/12/16 04:13:43 Ready to marshal response ...
	2025/12/16 04:13:43 Ready to write response ...
	2025/12/16 04:13:45 Ready to marshal response ...
	2025/12/16 04:13:45 Ready to write response ...
	2025/12/16 04:13:46 Ready to marshal response ...
	2025/12/16 04:13:46 Ready to write response ...
	2025/12/16 04:13:53 Ready to marshal response ...
	2025/12/16 04:13:53 Ready to write response ...
	2025/12/16 04:13:55 Ready to marshal response ...
	2025/12/16 04:13:55 Ready to write response ...
	2025/12/16 04:14:11 Ready to marshal response ...
	2025/12/16 04:14:11 Ready to write response ...
	2025/12/16 04:14:16 Ready to marshal response ...
	2025/12/16 04:14:16 Ready to write response ...
	2025/12/16 04:16:28 Ready to marshal response ...
	2025/12/16 04:16:28 Ready to write response ...
	
	
	==> kernel <==
	 04:16:31 up  2:58,  0 user,  load average: 0.47, 1.63, 1.66
	Linux addons-266389 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e] <==
	I1216 04:14:28.722372       1 main.go:301] handling current node
	I1216 04:14:38.718440       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:14:38.718514       1 main.go:301] handling current node
	I1216 04:14:48.718861       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:14:48.718977       1 main.go:301] handling current node
	I1216 04:14:58.721785       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:14:58.721825       1 main.go:301] handling current node
	I1216 04:15:08.722017       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:15:08.722135       1 main.go:301] handling current node
	I1216 04:15:18.718975       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:15:18.719092       1 main.go:301] handling current node
	I1216 04:15:28.727473       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:15:28.727511       1 main.go:301] handling current node
	I1216 04:15:38.725236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:15:38.725295       1 main.go:301] handling current node
	I1216 04:15:48.720084       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:15:48.720136       1 main.go:301] handling current node
	I1216 04:15:58.720249       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:15:58.720408       1 main.go:301] handling current node
	I1216 04:16:08.718895       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:16:08.718936       1 main.go:301] handling current node
	I1216 04:16:18.720210       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:16:18.720255       1 main.go:301] handling current node
	I1216 04:16:28.725820       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:16:28.725860       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628] <==
	E1216 04:12:19.252023       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.21.2:443: connect: connection refused" logger="UnhandledError"
	W1216 04:12:43.008850       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 04:12:43.008909       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1216 04:12:43.008924       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 04:12:43.009883       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 04:12:43.009965       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 04:12:43.009980       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 04:13:03.228617       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 04:13:03.228729       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1216 04:13:03.229773       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.150.152:443: connect: connection refused" logger="UnhandledError"
	E1216 04:13:03.234514       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.150.152:443: connect: connection refused" logger="UnhandledError"
	E1216 04:13:03.238557       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.150.152:443: connect: connection refused" logger="UnhandledError"
	I1216 04:13:03.378893       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 04:13:32.355652       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34174: use of closed network connection
	E1216 04:13:32.613738       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34216: use of closed network connection
	I1216 04:14:05.080019       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 04:14:10.867519       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 04:14:11.216633       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.38.69"}
	I1216 04:16:29.073248       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.71.110"}
	
	
	==> kube-controller-manager [9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3] <==
	I1216 04:11:35.991824       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 04:11:35.992020       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 04:11:35.992409       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 04:11:35.992447       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 04:11:35.992467       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 04:11:35.997756       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 04:11:35.998558       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 04:11:35.998593       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 04:11:35.998611       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 04:11:35.998643       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 04:11:35.998660       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 04:11:35.998664       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 04:11:35.998669       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 04:11:36.013482       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-266389" podCIDRs=["10.244.0.0/24"]
	E1216 04:11:42.203244       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1216 04:12:05.967585       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 04:12:05.967735       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1216 04:12:05.967791       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1216 04:12:06.013704       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 04:12:06.018767       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1216 04:12:06.068876       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:12:06.119503       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 04:12:20.947256       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1216 04:12:36.074560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 04:12:36.128227       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e] <==
	I1216 04:11:38.448784       1 server_linux.go:53] "Using iptables proxy"
	I1216 04:11:38.562763       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 04:11:38.663730       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 04:11:38.663767       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 04:11:38.663836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 04:11:38.920269       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 04:11:38.920322       1 server_linux.go:132] "Using iptables Proxier"
	I1216 04:11:38.927163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 04:11:38.927460       1 server.go:527] "Version info" version="v1.34.2"
	I1216 04:11:38.927480       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 04:11:38.929923       1 config.go:200] "Starting service config controller"
	I1216 04:11:38.929945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 04:11:38.929965       1 config.go:106] "Starting endpoint slice config controller"
	I1216 04:11:38.929969       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 04:11:38.929982       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 04:11:38.929986       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 04:11:38.930591       1 config.go:309] "Starting node config controller"
	I1216 04:11:38.930600       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 04:11:38.930606       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 04:11:39.030023       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 04:11:39.030107       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 04:11:39.030120       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d] <==
	E1216 04:11:29.051441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 04:11:29.051480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:11:29.051530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 04:11:29.051567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 04:11:29.051599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 04:11:29.051690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 04:11:29.051731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 04:11:29.051762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 04:11:29.051792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:11:29.052157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 04:11:29.052206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 04:11:29.057133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1216 04:11:29.057242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 04:11:29.864925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 04:11:29.992778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 04:11:30.023493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 04:11:30.033019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 04:11:30.102190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 04:11:30.148234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:11:30.203407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 04:11:30.215409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 04:11:30.241794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 04:11:30.251481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 04:11:30.451515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1216 04:11:33.630810       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 04:15:15 addons-266389 kubelet[1273]: E1216 04:15:15.870767    1273 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7cfhx_kube-system(d035c106-cbd0-4064-b23f-d8d1762768a2)\"" pod="kube-system/registry-creds-764b6fb674-7cfhx" podUID="d035c106-cbd0-4064-b23f-d8d1762768a2"
	Dec 16 04:15:29 addons-266389 kubelet[1273]: I1216 04:15:29.604512    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7cfhx" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:15:29 addons-266389 kubelet[1273]: I1216 04:15:29.605040    1273 scope.go:117] "RemoveContainer" containerID="6f93aea8a90c1e65ec76cdc5928ada434f6667f382d3e884c8d60f04de0959d0"
	Dec 16 04:15:29 addons-266389 kubelet[1273]: E1216 04:15:29.605373    1273 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7cfhx_kube-system(d035c106-cbd0-4064-b23f-d8d1762768a2)\"" pod="kube-system/registry-creds-764b6fb674-7cfhx" podUID="d035c106-cbd0-4064-b23f-d8d1762768a2"
	Dec 16 04:15:30 addons-266389 kubelet[1273]: I1216 04:15:30.604205    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-k95mm" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:15:36 addons-266389 kubelet[1273]: I1216 04:15:36.604657    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-6fhfq" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:15:42 addons-266389 kubelet[1273]: I1216 04:15:42.603675    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7cfhx" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:15:42 addons-266389 kubelet[1273]: I1216 04:15:42.603747    1273 scope.go:117] "RemoveContainer" containerID="6f93aea8a90c1e65ec76cdc5928ada434f6667f382d3e884c8d60f04de0959d0"
	Dec 16 04:15:42 addons-266389 kubelet[1273]: E1216 04:15:42.604117    1273 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7cfhx_kube-system(d035c106-cbd0-4064-b23f-d8d1762768a2)\"" pod="kube-system/registry-creds-764b6fb674-7cfhx" podUID="d035c106-cbd0-4064-b23f-d8d1762768a2"
	Dec 16 04:15:46 addons-266389 kubelet[1273]: I1216 04:15:46.604004    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-pj9b6" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:15:55 addons-266389 kubelet[1273]: I1216 04:15:55.604062    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7cfhx" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:15:55 addons-266389 kubelet[1273]: I1216 04:15:55.604625    1273 scope.go:117] "RemoveContainer" containerID="6f93aea8a90c1e65ec76cdc5928ada434f6667f382d3e884c8d60f04de0959d0"
	Dec 16 04:15:56 addons-266389 kubelet[1273]: I1216 04:15:56.022136    1273 scope.go:117] "RemoveContainer" containerID="6f93aea8a90c1e65ec76cdc5928ada434f6667f382d3e884c8d60f04de0959d0"
	Dec 16 04:15:56 addons-266389 kubelet[1273]: I1216 04:15:56.023255    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7cfhx" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:15:56 addons-266389 kubelet[1273]: I1216 04:15:56.023305    1273 scope.go:117] "RemoveContainer" containerID="0afa9f5e8f70bcc8bf5869aa6bec47633d5a7e36723b3fdcf335749aaf3b8aa3"
	Dec 16 04:15:56 addons-266389 kubelet[1273]: E1216 04:15:56.023473    1273 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7cfhx_kube-system(d035c106-cbd0-4064-b23f-d8d1762768a2)\"" pod="kube-system/registry-creds-764b6fb674-7cfhx" podUID="d035c106-cbd0-4064-b23f-d8d1762768a2"
	Dec 16 04:16:08 addons-266389 kubelet[1273]: I1216 04:16:08.604641    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7cfhx" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:16:08 addons-266389 kubelet[1273]: I1216 04:16:08.604708    1273 scope.go:117] "RemoveContainer" containerID="0afa9f5e8f70bcc8bf5869aa6bec47633d5a7e36723b3fdcf335749aaf3b8aa3"
	Dec 16 04:16:08 addons-266389 kubelet[1273]: E1216 04:16:08.604880    1273 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7cfhx_kube-system(d035c106-cbd0-4064-b23f-d8d1762768a2)\"" pod="kube-system/registry-creds-764b6fb674-7cfhx" podUID="d035c106-cbd0-4064-b23f-d8d1762768a2"
	Dec 16 04:16:20 addons-266389 kubelet[1273]: I1216 04:16:20.604530    1273 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7cfhx" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:16:20 addons-266389 kubelet[1273]: I1216 04:16:20.604606    1273 scope.go:117] "RemoveContainer" containerID="0afa9f5e8f70bcc8bf5869aa6bec47633d5a7e36723b3fdcf335749aaf3b8aa3"
	Dec 16 04:16:20 addons-266389 kubelet[1273]: E1216 04:16:20.604768    1273 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7cfhx_kube-system(d035c106-cbd0-4064-b23f-d8d1762768a2)\"" pod="kube-system/registry-creds-764b6fb674-7cfhx" podUID="d035c106-cbd0-4064-b23f-d8d1762768a2"
	Dec 16 04:16:29 addons-266389 kubelet[1273]: I1216 04:16:29.078997    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxrr\" (UniqueName: \"kubernetes.io/projected/c57142e0-66f0-40bf-aaae-ce6d3d03e620-kube-api-access-lsxrr\") pod \"hello-world-app-5d498dc89-qdmhx\" (UID: \"c57142e0-66f0-40bf-aaae-ce6d3d03e620\") " pod="default/hello-world-app-5d498dc89-qdmhx"
	Dec 16 04:16:29 addons-266389 kubelet[1273]: I1216 04:16:29.079570    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c57142e0-66f0-40bf-aaae-ce6d3d03e620-gcp-creds\") pod \"hello-world-app-5d498dc89-qdmhx\" (UID: \"c57142e0-66f0-40bf-aaae-ce6d3d03e620\") " pod="default/hello-world-app-5d498dc89-qdmhx"
	Dec 16 04:16:29 addons-266389 kubelet[1273]: W1216 04:16:29.284375    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/crio-d55f99220f6a360cf2c98ae184c7170f609c575d859a345908e5a02ab7a1bc05 WatchSource:0}: Error finding container d55f99220f6a360cf2c98ae184c7170f609c575d859a345908e5a02ab7a1bc05: Status 404 returned error can't find the container with id d55f99220f6a360cf2c98ae184c7170f609c575d859a345908e5a02ab7a1bc05
	
	
	==> storage-provisioner [198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b] <==
	W1216 04:16:07.101681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:09.104441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:09.109001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:11.112157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:11.116573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:13.119651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:13.124042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:15.128343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:15.133977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:17.136842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:17.141699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:19.144850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:19.151371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:21.154737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:21.159277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:23.162309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:23.168765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:25.171766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:25.176345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:27.179389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:27.183970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:29.189645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:29.202591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:31.206693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:16:31.213759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
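One note on the kubelet section of the dump above: registry-creds is in CrashLoopBackOff with a growing back-off (40s, then 1m20s), while the repeated "secret \"gcp-auth\" not found" lines are a separate, lower-grade warning: the kubelet cannot attach the gcp-auth image pull secret, but pulls from public registries proceed without it. A minimal sketch for retrieving the crashed container's last log, assuming the cluster from this run is still reachable (the deployment name is inferred from the pod name in the list above):

	# Previous (crashed) container log for the registry-creds deployment:
	$ kubectl --context addons-266389 -n kube-system logs deployment/registry-creds --previous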
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-266389 -n addons-266389
helpers_test.go:270: (dbg) Run:  kubectl --context addons-266389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-n7d4f ingress-nginx-admission-patch-8m974
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-266389 describe pod ingress-nginx-admission-create-n7d4f ingress-nginx-admission-patch-8m974
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-266389 describe pod ingress-nginx-admission-create-n7d4f ingress-nginx-admission-patch-8m974: exit status 1 (82.62801ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-n7d4f" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8m974" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-266389 describe pod ingress-nginx-admission-create-n7d4f ingress-nginx-admission-patch-8m974: exit status 1
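The two NotFound errors in that post-mortem do not point to a further cluster problem: the `describe pod` above runs without a `-n` flag, so it looks in the default namespace, while the ingress-nginx admission job pods live in the ingress-nginx namespace when they exist at all (as completed job pods they may also be garbage-collected between the `get` and the `describe`). A hedged sketch of the namespaced variant of the same command, not part of the test harness:

	# Same post-mortem, scoped to the namespace the admission jobs run in:
	$ kubectl --context addons-266389 -n ingress-nginx describe pod \
	    ingress-nginx-admission-create-n7d4f ingress-nginx-admission-patch-8m974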
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (309.086788ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:16:32.177668  452086 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:16:32.178493  452086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:16:32.178547  452086 out.go:374] Setting ErrFile to fd 2...
	I1216 04:16:32.178568  452086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:16:32.178980  452086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:16:32.179387  452086 mustload.go:66] Loading cluster: addons-266389
	I1216 04:16:32.180131  452086 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:16:32.180185  452086 addons.go:622] checking whether the cluster is paused
	I1216 04:16:32.180715  452086 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:16:32.180783  452086 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:16:32.181436  452086 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:16:32.200036  452086 ssh_runner.go:195] Run: systemctl --version
	I1216 04:16:32.200095  452086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:16:32.227590  452086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:16:32.343124  452086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:16:32.343246  452086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:16:32.399222  452086 cri.go:89] found id: "0afa9f5e8f70bcc8bf5869aa6bec47633d5a7e36723b3fdcf335749aaf3b8aa3"
	I1216 04:16:32.399242  452086 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:16:32.399248  452086 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:16:32.399252  452086 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:16:32.399255  452086 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:16:32.399259  452086 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:16:32.399262  452086 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:16:32.399266  452086 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:16:32.399269  452086 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:16:32.399277  452086 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:16:32.399280  452086 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:16:32.399283  452086 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:16:32.399286  452086 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:16:32.399289  452086 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:16:32.399292  452086 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:16:32.399298  452086 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:16:32.399302  452086 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:16:32.399306  452086 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:16:32.399309  452086 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:16:32.399312  452086 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:16:32.399317  452086 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:16:32.399320  452086 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:16:32.399323  452086 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:16:32.399326  452086 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:16:32.399329  452086 cri.go:89] found id: ""
	I1216 04:16:32.399384  452086 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:16:32.419349  452086 out.go:203] 
	W1216 04:16:32.422291  452086 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:16:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:16:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:16:32.422381  452086 out.go:285] * 
	* 
	W1216 04:16:32.429771  452086 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:16:32.432792  452086 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
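Every `addons disable` in this test exits with status 11 at the same pre-flight step: minikube checks whether the cluster is paused before touching an addon, and on this cri-o node that check shells out to `sudo runc list -f json`, which fails because runc's default state root `/run/runc` does not exist. The commands below are a minimal sketch for reproducing the probe by hand, not part of the test harness; the profile name is taken from this run and `/run/runc` is runc's documented default `--root`:

	# The probe minikube runs, verbatim from the stderr above (fails):
	$ out/minikube-linux-arm64 -p addons-266389 ssh -- sudo runc list -f json
	# Confirm the failure mode: the default state root is simply absent.
	$ out/minikube-linux-arm64 -p addons-266389 ssh -- sudo ls /run/runc
	# cri-o tracks its own containers; the crictl listing minikube ran just
	# before (the "found id:" lines) succeeds against the same node:
	$ out/minikube-linux-arm64 -p addons-266389 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system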
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable ingress --alsologtostderr -v=1: exit status 11 (274.84363ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:16:32.503450  452207 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:16:32.505364  452207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:16:32.505389  452207 out.go:374] Setting ErrFile to fd 2...
	I1216 04:16:32.505396  452207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:16:32.505725  452207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:16:32.506106  452207 mustload.go:66] Loading cluster: addons-266389
	I1216 04:16:32.506566  452207 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:16:32.506590  452207 addons.go:622] checking whether the cluster is paused
	I1216 04:16:32.506743  452207 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:16:32.506762  452207 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:16:32.507432  452207 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:16:32.524962  452207 ssh_runner.go:195] Run: systemctl --version
	I1216 04:16:32.525031  452207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:16:32.544677  452207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:16:32.644219  452207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:16:32.644373  452207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:16:32.678432  452207 cri.go:89] found id: "0afa9f5e8f70bcc8bf5869aa6bec47633d5a7e36723b3fdcf335749aaf3b8aa3"
	I1216 04:16:32.678457  452207 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:16:32.678462  452207 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:16:32.678466  452207 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:16:32.678470  452207 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:16:32.678474  452207 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:16:32.678478  452207 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:16:32.678481  452207 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:16:32.678484  452207 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:16:32.678491  452207 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:16:32.678494  452207 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:16:32.678498  452207 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:16:32.678501  452207 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:16:32.678505  452207 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:16:32.678509  452207 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:16:32.678515  452207 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:16:32.678519  452207 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:16:32.678523  452207 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:16:32.678526  452207 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:16:32.678529  452207 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:16:32.678540  452207 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:16:32.678544  452207 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:16:32.678560  452207 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:16:32.678563  452207 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:16:32.678567  452207 cri.go:89] found id: ""
	I1216 04:16:32.678620  452207 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:16:32.694399  452207 out.go:203] 
	W1216 04:16:32.697358  452207 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:16:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:16:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:16:32.697390  452207 out.go:285] * 
	* 
	W1216 04:16:32.703017  452207 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:16:32.706026  452207 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (142.17s)
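
Note on the failure mode shared by this and the following addon tests: before toggling an addon, minikube checks whether the cluster is paused. In the trace above that check loads the profile config, inspects the docker container, opens an SSH session, successfully lists kube-system containers with crictl, and then runs `sudo runc list -f json`, which fails because /run/runc does not exist on this crio node. The check's failure, not the addon itself, is what aborts with MK_ADDON_DISABLE_PAUSED (MK_ADDON_ENABLE_PAUSED on the enable path, as in the Headlamp test below). A minimal diagnostic sketch, assuming shell access through `minikube ssh`; whether crio keeps its OCI runtime state under /run/runc is an assumption taken from the error text, not verified here:

	# Does the state root the paused-check expects exist at all?
	minikube -p addons-266389 ssh -- ls -ld /run/runc
	# Re-run the two probes from the trace: crictl succeeded, runc did not.
	minikube -p addons-266389 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-266389 ssh -- sudo runc --root /run/runc list -f json

If crictl lists running containers while the runc state directory is absent, the cluster is not actually paused and the paused-check is the component at fault.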

                                                
                                    
TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-w7z9q" [834570fa-024f-437a-b95d-d9b439a6e3d7] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003823897s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (257.800194ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:14:10.340682  450314 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:14:10.341437  450314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:14:10.341455  450314 out.go:374] Setting ErrFile to fd 2...
	I1216 04:14:10.341463  450314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:14:10.341889  450314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:14:10.342332  450314 mustload.go:66] Loading cluster: addons-266389
	I1216 04:14:10.342968  450314 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:14:10.342991  450314 addons.go:622] checking whether the cluster is paused
	I1216 04:14:10.343167  450314 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:14:10.343189  450314 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:14:10.343923  450314 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:14:10.361858  450314 ssh_runner.go:195] Run: systemctl --version
	I1216 04:14:10.361917  450314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:14:10.379505  450314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:14:10.475853  450314 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:14:10.475957  450314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:14:10.506776  450314 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:14:10.506798  450314 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:14:10.506803  450314 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:14:10.506814  450314 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:14:10.506818  450314 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:14:10.506822  450314 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:14:10.506825  450314 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:14:10.506829  450314 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:14:10.506832  450314 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:14:10.506838  450314 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:14:10.506847  450314 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:14:10.506850  450314 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:14:10.506853  450314 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:14:10.506856  450314 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:14:10.506860  450314 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:14:10.506865  450314 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:14:10.506872  450314 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:14:10.506876  450314 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:14:10.506879  450314 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:14:10.506882  450314 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:14:10.506887  450314 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:14:10.506890  450314 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:14:10.506893  450314 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:14:10.506896  450314 cri.go:89] found id: ""
	I1216 04:14:10.506952  450314 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:14:10.524504  450314 out.go:203] 
	W1216 04:14:10.527550  450314 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:14:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:14:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:14:10.527581  450314 out.go:285] * 
	* 
	W1216 04:14:10.533334  450314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:14:10.536388  450314 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.38s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 6.425342ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003683758s
addons_test.go:465: (dbg) Run:  kubectl --context addons-266389 top pods -n kube-system
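
The `kubectl top pods` step above is served by the aggregated metrics.k8s.io API that the metrics-server addon provides, not by the core API. A hedged way to probe that API directly when `top` misbehaves; the v1beta1 resource path is the standard metrics API endpoint, assumed here rather than taken from the test:

	kubectl --context addons-266389 get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods" | head -c 300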
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (279.855676ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:14:04.053125  450178 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:14:04.053828  450178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:14:04.053849  450178 out.go:374] Setting ErrFile to fd 2...
	I1216 04:14:04.053856  450178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:14:04.054207  450178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:14:04.054654  450178 mustload.go:66] Loading cluster: addons-266389
	I1216 04:14:04.055154  450178 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:14:04.055179  450178 addons.go:622] checking whether the cluster is paused
	I1216 04:14:04.055334  450178 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:14:04.055353  450178 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:14:04.055952  450178 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:14:04.078485  450178 ssh_runner.go:195] Run: systemctl --version
	I1216 04:14:04.078554  450178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:14:04.102250  450178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:14:04.204231  450178 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:14:04.204375  450178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:14:04.240914  450178 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:14:04.240982  450178 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:14:04.240996  450178 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:14:04.241001  450178 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:14:04.241005  450178 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:14:04.241009  450178 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:14:04.241012  450178 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:14:04.241016  450178 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:14:04.241019  450178 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:14:04.241026  450178 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:14:04.241033  450178 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:14:04.241036  450178 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:14:04.241040  450178 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:14:04.241043  450178 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:14:04.241047  450178 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:14:04.241090  450178 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:14:04.241099  450178 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:14:04.241104  450178 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:14:04.241107  450178 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:14:04.241111  450178 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:14:04.241116  450178 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:14:04.241119  450178 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:14:04.241122  450178 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:14:04.241125  450178 cri.go:89] found id: ""
	I1216 04:14:04.241180  450178 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:14:04.260050  450178 out.go:203] 
	W1216 04:14:04.264047  450178 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:14:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:14:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:14:04.264108  450178 out.go:285] * 
	* 
	W1216 04:14:04.269865  450178 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:14:04.273504  450178 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.38s)

                                                
                                    
TestAddons/parallel/CSI (30.2s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1216 04:13:54.387901  441727 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 04:13:54.392106  441727 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 04:13:54.392171  441727 kapi.go:107] duration metric: took 4.280821ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 4.293145ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-266389 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-266389 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [bc56fcf7-5127-4597-973c-489e1d96f3d1] Pending
helpers_test.go:353: "task-pv-pod" [bc56fcf7-5127-4597-973c-489e1d96f3d1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [bc56fcf7-5127-4597-973c-489e1d96f3d1] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003630661s
addons_test.go:574: (dbg) Run:  kubectl --context addons-266389 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-266389 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-266389 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-266389 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-266389 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-266389 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
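
The repeated jsonpath gets above are the test helper polling the restored claim until its phase reaches Bound. A single-command equivalent, assuming a kubectl recent enough to support jsonpath conditions in `kubectl wait` (v1.23+):

	kubectl --context addons-266389 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc-restore --timeout=6m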
addons_test.go:606: (dbg) Run:  kubectl --context addons-266389 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [99ab65a0-f911-40d9-9b87-124171c78e28] Pending
helpers_test.go:353: "task-pv-pod-restore" [99ab65a0-f911-40d9-9b87-124171c78e28] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [99ab65a0-f911-40d9-9b87-124171c78e28] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003442087s
addons_test.go:616: (dbg) Run:  kubectl --context addons-266389 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-266389 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-266389 delete volumesnapshot new-snapshot-demo
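
For reference, the restore step exercised above (pvc-restore.yaml) works by pointing a new claim's dataSource at the VolumeSnapshot. A minimal sketch of a manifest of that shape; the storage class name and size are illustrative assumptions, not read from the testdata directory:

	kubectl --context addons-266389 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 1Gi
	EOF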
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (253.707137ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:14:24.128960  450814 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:14:24.129751  450814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:14:24.129793  450814 out.go:374] Setting ErrFile to fd 2...
	I1216 04:14:24.129814  450814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:14:24.130121  450814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:14:24.130467  450814 mustload.go:66] Loading cluster: addons-266389
	I1216 04:14:24.130917  450814 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:14:24.130956  450814 addons.go:622] checking whether the cluster is paused
	I1216 04:14:24.131102  450814 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:14:24.131133  450814 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:14:24.131669  450814 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:14:24.150781  450814 ssh_runner.go:195] Run: systemctl --version
	I1216 04:14:24.150848  450814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:14:24.169198  450814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:14:24.264228  450814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:14:24.264334  450814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:14:24.294364  450814 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:14:24.294385  450814 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:14:24.294390  450814 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:14:24.294394  450814 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:14:24.294397  450814 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:14:24.294401  450814 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:14:24.294404  450814 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:14:24.294408  450814 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:14:24.294411  450814 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:14:24.294420  450814 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:14:24.294424  450814 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:14:24.294428  450814 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:14:24.294431  450814 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:14:24.294435  450814 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:14:24.294438  450814 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:14:24.294456  450814 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:14:24.294464  450814 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:14:24.294469  450814 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:14:24.294473  450814 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:14:24.294476  450814 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:14:24.294481  450814 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:14:24.294484  450814 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:14:24.294487  450814 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:14:24.294490  450814 cri.go:89] found id: ""
	I1216 04:14:24.294546  450814 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:14:24.309899  450814 out.go:203] 
	W1216 04:14:24.312770  450814 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:14:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:14:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:14:24.312804  450814 out.go:285] * 
	* 
	W1216 04:14:24.318401  450814 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:14:24.321284  450814 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (259.170247ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:14:24.378118  450857 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:14:24.379160  450857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:14:24.379178  450857 out.go:374] Setting ErrFile to fd 2...
	I1216 04:14:24.379187  450857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:14:24.379479  450857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:14:24.379791  450857 mustload.go:66] Loading cluster: addons-266389
	I1216 04:14:24.380216  450857 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:14:24.380236  450857 addons.go:622] checking whether the cluster is paused
	I1216 04:14:24.380343  450857 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:14:24.380356  450857 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:14:24.380863  450857 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:14:24.398895  450857 ssh_runner.go:195] Run: systemctl --version
	I1216 04:14:24.398959  450857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:14:24.419508  450857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:14:24.515650  450857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:14:24.515732  450857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:14:24.547500  450857 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:14:24.547526  450857 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:14:24.547531  450857 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:14:24.547535  450857 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:14:24.547538  450857 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:14:24.547542  450857 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:14:24.547545  450857 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:14:24.547548  450857 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:14:24.547552  450857 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:14:24.547558  450857 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:14:24.547562  450857 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:14:24.547565  450857 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:14:24.547568  450857 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:14:24.547572  450857 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:14:24.547575  450857 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:14:24.547581  450857 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:14:24.547584  450857 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:14:24.547588  450857 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:14:24.547592  450857 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:14:24.547594  450857 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:14:24.547599  450857 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:14:24.547610  450857 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:14:24.547614  450857 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:14:24.547616  450857 cri.go:89] found id: ""
	I1216 04:14:24.547665  450857 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:14:24.568853  450857 out.go:203] 
	W1216 04:14:24.571874  450857 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:14:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:14:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:14:24.571902  450857 out.go:285] * 
	* 
	W1216 04:14:24.577559  450857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:14:24.580596  450857 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (30.20s)

                                                
                                    
TestAddons/parallel/Headlamp (3.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-266389 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-266389 --alsologtostderr -v=1: exit status 11 (321.363651ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:13:54.081889  449349 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:13:54.083872  449349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:54.083939  449349 out.go:374] Setting ErrFile to fd 2...
	I1216 04:13:54.083972  449349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:54.084301  449349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:13:54.084710  449349 mustload.go:66] Loading cluster: addons-266389
	I1216 04:13:54.085368  449349 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:54.085452  449349 addons.go:622] checking whether the cluster is paused
	I1216 04:13:54.085627  449349 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:54.085661  449349 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:13:54.086251  449349 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:13:54.115757  449349 ssh_runner.go:195] Run: systemctl --version
	I1216 04:13:54.115827  449349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:13:54.144213  449349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:13:54.241005  449349 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:13:54.241136  449349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:13:54.271020  449349 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:13:54.271044  449349 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:13:54.271049  449349 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:13:54.271054  449349 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:13:54.271057  449349 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:13:54.271063  449349 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:13:54.271066  449349 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:13:54.271070  449349 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:13:54.271073  449349 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:13:54.271084  449349 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:13:54.271088  449349 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:13:54.271091  449349 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:13:54.271094  449349 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:13:54.271097  449349 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:13:54.271100  449349 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:13:54.271110  449349 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:13:54.271117  449349 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:13:54.271124  449349 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:13:54.271127  449349 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:13:54.271130  449349 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:13:54.271134  449349 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:13:54.271137  449349 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:13:54.271141  449349 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:13:54.271144  449349 cri.go:89] found id: ""
	I1216 04:13:54.271199  449349 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:13:54.286747  449349 out.go:203] 
	W1216 04:13:54.289906  449349 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:13:54.289970  449349 out.go:285] * 
	* 
	W1216 04:13:54.296499  449349 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:13:54.299484  449349 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-266389 --alsologtostderr -v=1": exit status 11
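The failure above is not Headlamp-specific: the addon command's "check paused" step shells out to "sudo runc list -f json" inside the node, and runc's default state root /run/runc does not exist there, so the check errors out before the addon is ever touched (the volcano disable earlier in this report fails the same way). A minimal way to reproduce the check from the host, as a diagnostic sketch using this run's profile name (the crictl call is an alternative, runtime-agnostic view, not something the harness runs):

	# does runc's default state root exist inside the node? (missing in this run)
	out/minikube-linux-arm64 -p addons-266389 ssh -- ls -ld /run/runc

	# re-run the exact command the paused check uses
	out/minikube-linux-arm64 -p addons-266389 ssh -- sudo runc list -f json

	# list the same containers through cri-o instead of calling runc directly
	out/minikube-linux-arm64 -p addons-266389 ssh -- sudo crictl ps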
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-266389
helpers_test.go:244: (dbg) docker inspect addons-266389:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987",
	        "Created": "2025-12-16T04:11:08.406545814Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 443105,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:11:08.475077028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/hosts",
	        "LogPath": "/var/lib/docker/containers/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987-json.log",
	        "Name": "/addons-266389",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-266389:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-266389",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987",
	                "LowerDir": "/var/lib/docker/overlay2/de2d89a3bc2dae47cbf1a7f9b8b171048ebc2184f6036d5dde9eb8a2da6951c5-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de2d89a3bc2dae47cbf1a7f9b8b171048ebc2184f6036d5dde9eb8a2da6951c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de2d89a3bc2dae47cbf1a7f9b8b171048ebc2184f6036d5dde9eb8a2da6951c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de2d89a3bc2dae47cbf1a7f9b8b171048ebc2184f6036d5dde9eb8a2da6951c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-266389",
	                "Source": "/var/lib/docker/volumes/addons-266389/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-266389",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-266389",
	                "name.minikube.sigs.k8s.io": "addons-266389",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f19a4df5d96066478ebc4cc4326cda23338db4fcd77a621c569300f63befa945",
	            "SandboxKey": "/var/run/docker/netns/f19a4df5d960",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-266389": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:1b:50:b8:c7:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6eef94a8007f7ed82f36cde36f08b7467c5fc8984713511ba3a7c8bb1ab8afa",
	                    "EndpointID": "89f20aab0f197d1ea7f984566ad3de075f8447cc2d02920181350c670158b91a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-266389",
	                        "9c3b592c224e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
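Note how HostConfig.PortBindings above declares each port with HostIp 127.0.0.1 and an empty HostPort: Docker assigns ephemeral host ports at container start, and the assigned values appear under NetworkSettings.Ports (22/tcp -> 33133, 8443/tcp -> 33136, and so on). A sketch of how such a mapping can be resolved with an inspect template; the same template appears verbatim in the start log further down:

	# print the ephemeral host port bound to the node's SSH port (33133 in this run)
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-266389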
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-266389 -n addons-266389
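The --format flag takes a Go template over minikube's status struct, so individual components can be probed the same way as .Host; a usage sketch (the extra fields come from minikube's documented status output, not from this run):

	# query several status fields at once instead of just the host state
	out/minikube-linux-arm64 status -p addons-266389 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'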
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-266389 logs -n 25: (1.767728763s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	├──────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ delete │ -p download-only-218041 │ download-only-218041 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ start │ -o=json --download-only -p download-only-956467 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-956467 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ delete │ -p download-only-956467 │ download-only-956467 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ delete │ -p download-only-229746 │ download-only-229746 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ delete │ -p download-only-218041 │ download-only-218041 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ delete │ -p download-only-956467 │ download-only-956467 │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ start │ --download-only -p download-docker-461022 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-461022 │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ │
	│ delete │ -p download-docker-461022 │ download-docker-461022 │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ start │ --download-only -p binary-mirror-260964 --alsologtostderr --binary-mirror http://127.0.0.1:39905 --driver=docker  --container-runtime=crio │ binary-mirror-260964 │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ │
	│ delete │ -p binary-mirror-260964 │ binary-mirror-260964 │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:11 UTC │
	│ addons │ enable dashboard -p addons-266389 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ │
	│ addons │ disable dashboard -p addons-266389 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ │
	│ start │ -p addons-266389 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:11 UTC │ 16 Dec 25 04:13 UTC │
	│ addons │ addons-266389 addons disable volcano --alsologtostderr -v=1 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ │
	│ addons │ addons-266389 addons disable gcp-auth --alsologtostderr -v=1 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ │
	│ addons │ addons-266389 addons disable yakd --alsologtostderr -v=1 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ │
	│ addons │ addons-266389 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ │
	│ ip │ addons-266389 ip │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ 16 Dec 25 04:13 UTC │
	│ addons │ addons-266389 addons disable registry --alsologtostderr -v=1 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ │
	│ ssh │ addons-266389 ssh cat /opt/local-path-provisioner/pvc-12852da6-9e8a-4765-8a93-15cde56a9879_default_test-pvc/file1 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ 16 Dec 25 04:13 UTC │
	│ addons │ addons-266389 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ │
	│ addons │ enable headlamp -p addons-266389 --alsologtostderr -v=1 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ │
	│ addons │ addons-266389 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-266389 │ jenkins │ v1.37.0 │ 16 Dec 25 04:13 UTC │ │
	└──────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:11:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:11:01.618400  442720 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:11:01.618611  442720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:01.618639  442720 out.go:374] Setting ErrFile to fd 2...
	I1216 04:11:01.618658  442720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:11:01.618961  442720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:11:01.619492  442720 out.go:368] Setting JSON to false
	I1216 04:11:01.620382  442720 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10408,"bootTime":1765847854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:11:01.620485  442720 start.go:143] virtualization:  
	I1216 04:11:01.624197  442720 out.go:179] * [addons-266389] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:11:01.627504  442720 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:11:01.627599  442720 notify.go:221] Checking for updates...
	I1216 04:11:01.633683  442720 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:11:01.636692  442720 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:11:01.640202  442720 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:11:01.643173  442720 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:11:01.646230  442720 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:11:01.649339  442720 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:11:01.688515  442720 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:11:01.688634  442720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:01.742025  442720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 04:11:01.733059509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:01.742137  442720 docker.go:319] overlay module found
	I1216 04:11:01.745300  442720 out.go:179] * Using the docker driver based on user configuration
	I1216 04:11:01.748234  442720 start.go:309] selected driver: docker
	I1216 04:11:01.748255  442720 start.go:927] validating driver "docker" against <nil>
	I1216 04:11:01.748268  442720 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:11:01.748997  442720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:11:01.820098  442720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 04:11:01.810549634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:11:01.820271  442720 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:11:01.820522  442720 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:11:01.823556  442720 out.go:179] * Using Docker driver with root privileges
	I1216 04:11:01.826668  442720 cni.go:84] Creating CNI manager for ""
	I1216 04:11:01.826747  442720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:11:01.826759  442720 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:11:01.826839  442720 start.go:353] cluster config:
	{Name:addons-266389 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-266389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:01.830189  442720 out.go:179] * Starting "addons-266389" primary control-plane node in "addons-266389" cluster
	I1216 04:11:01.833039  442720 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:11:01.836179  442720 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:11:01.839074  442720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:11:01.839127  442720 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 04:11:01.839169  442720 cache.go:65] Caching tarball of preloaded images
	I1216 04:11:01.839247  442720 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:11:01.839262  442720 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:11:01.839274  442720 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 04:11:01.839632  442720 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/config.json ...
	I1216 04:11:01.839654  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/config.json: {Name:mk928b082baefcda33cbb318ef9234c1ac520635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:01.859858  442720 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:11:01.859881  442720 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:11:01.859901  442720 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:11:01.859935  442720 start.go:360] acquireMachinesLock for addons-266389: {Name:mk82ef214a88a1269a11e23e2aa5197425e975a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:11:01.860043  442720 start.go:364] duration metric: took 86.105µs to acquireMachinesLock for "addons-266389"
	I1216 04:11:01.860075  442720 start.go:93] Provisioning new machine with config: &{Name:addons-266389 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-266389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:11:01.860146  442720 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:11:01.863619  442720 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1216 04:11:01.863887  442720 start.go:159] libmachine.API.Create for "addons-266389" (driver="docker")
	I1216 04:11:01.863925  442720 client.go:173] LocalClient.Create starting
	I1216 04:11:01.864042  442720 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem
	I1216 04:11:01.977945  442720 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem
	I1216 04:11:02.042171  442720 cli_runner.go:164] Run: docker network inspect addons-266389 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:11:02.059779  442720 cli_runner.go:211] docker network inspect addons-266389 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:11:02.059871  442720 network_create.go:284] running [docker network inspect addons-266389] to gather additional debugging logs...
	I1216 04:11:02.059892  442720 cli_runner.go:164] Run: docker network inspect addons-266389
	W1216 04:11:02.076169  442720 cli_runner.go:211] docker network inspect addons-266389 returned with exit code 1
	I1216 04:11:02.076226  442720 network_create.go:287] error running [docker network inspect addons-266389]: docker network inspect addons-266389: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-266389 not found
	I1216 04:11:02.076241  442720 network_create.go:289] output of [docker network inspect addons-266389]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-266389 not found
	
	** /stderr **
	I1216 04:11:02.076344  442720 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:11:02.093057  442720 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ba760}
	I1216 04:11:02.093117  442720 network_create.go:124] attempt to create docker network addons-266389 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 04:11:02.093181  442720 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-266389 addons-266389
	I1216 04:11:02.154824  442720 network_create.go:108] docker network addons-266389 192.168.49.0/24 created
	I1216 04:11:02.154856  442720 kic.go:121] calculated static IP "192.168.49.2" for the "addons-266389" container
	I1216 04:11:02.154953  442720 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:11:02.171697  442720 cli_runner.go:164] Run: docker volume create addons-266389 --label name.minikube.sigs.k8s.io=addons-266389 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:11:02.188990  442720 oci.go:103] Successfully created a docker volume addons-266389
	I1216 04:11:02.189257  442720 cli_runner.go:164] Run: docker run --rm --name addons-266389-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-266389 --entrypoint /usr/bin/test -v addons-266389:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 04:11:04.340423  442720 cli_runner.go:217] Completed: docker run --rm --name addons-266389-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-266389 --entrypoint /usr/bin/test -v addons-266389:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib: (2.151123081s)
	I1216 04:11:04.340456  442720 oci.go:107] Successfully prepared a docker volume addons-266389
	I1216 04:11:04.340504  442720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:11:04.340518  442720 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:11:04.340592  442720 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-266389:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:11:08.318344  442720 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-266389:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.977709177s)
	I1216 04:11:08.318382  442720 kic.go:203] duration metric: took 3.977860333s to extract preloaded images to volume ...
	W1216 04:11:08.318528  442720 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 04:11:08.318643  442720 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:11:08.390990  442720 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-266389 --name addons-266389 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-266389 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-266389 --network addons-266389 --ip 192.168.49.2 --volume addons-266389:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 04:11:08.713456  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Running}}
	I1216 04:11:08.734404  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:08.759829  442720 cli_runner.go:164] Run: docker exec addons-266389 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:11:08.817569  442720 oci.go:144] the created container "addons-266389" has a running status.
	I1216 04:11:08.817598  442720 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa...
	I1216 04:11:09.050275  442720 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:11:09.081399  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:09.107862  442720 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:11:09.107880  442720 kic_runner.go:114] Args: [docker exec --privileged addons-266389 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:11:09.167433  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:09.193351  442720 machine.go:94] provisionDockerMachine start ...
	I1216 04:11:09.193449  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:09.219033  442720 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:09.219399  442720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1216 04:11:09.219409  442720 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:11:09.220052  442720 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58428->127.0.0.1:33133: read: connection reset by peer
	I1216 04:11:12.356940  442720 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-266389
	
	I1216 04:11:12.356975  442720 ubuntu.go:182] provisioning hostname "addons-266389"
	I1216 04:11:12.357045  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:12.375362  442720 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:12.375677  442720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1216 04:11:12.375694  442720 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-266389 && echo "addons-266389" | sudo tee /etc/hostname
	I1216 04:11:12.519476  442720 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-266389
	
	I1216 04:11:12.519586  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:12.537725  442720 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:12.538050  442720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1216 04:11:12.538076  442720 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-266389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-266389/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-266389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:11:12.669282  442720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:11:12.669310  442720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:11:12.669332  442720 ubuntu.go:190] setting up certificates
	I1216 04:11:12.669348  442720 provision.go:84] configureAuth start
	I1216 04:11:12.669413  442720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-266389
	I1216 04:11:12.688045  442720 provision.go:143] copyHostCerts
	I1216 04:11:12.688128  442720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:11:12.688265  442720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:11:12.688327  442720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:11:12.688391  442720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.addons-266389 san=[127.0.0.1 192.168.49.2 addons-266389 localhost minikube]
	I1216 04:11:12.892884  442720 provision.go:177] copyRemoteCerts
	I1216 04:11:12.892960  442720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:11:12.893011  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:12.910010  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:13.008547  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 04:11:13.028110  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:11:13.046294  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:11:13.064187  442720 provision.go:87] duration metric: took 394.81974ms to configureAuth
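configureAuth generates a server certificate whose SANs cover every name and address the machine may be dialed on (the san=[...] list in the provision.go line above). An equivalent openssl sketch, assuming the CA files from the log are in the working directory (minikube does this in Go, not by shelling out to openssl):

	# CSR with the org name used above, then sign it with the minikube CA,
	# attaching the same SANs listed in the provision.go output.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.addons-266389"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-266389,DNS:localhost,DNS:minikube')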
	I1216 04:11:13.064215  442720 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:11:13.064423  442720 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:11:13.064530  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:13.082250  442720 main.go:143] libmachine: Using SSH client type: native
	I1216 04:11:13.082564  442720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1216 04:11:13.082584  442720 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:11:13.523886  442720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:11:13.523954  442720 machine.go:97] duration metric: took 4.330582197s to provisionDockerMachine
	I1216 04:11:13.523981  442720 client.go:176] duration metric: took 11.6600486s to LocalClient.Create
	I1216 04:11:13.524028  442720 start.go:167] duration metric: took 11.660127658s to libmachine.API.Create "addons-266389"
	I1216 04:11:13.524054  442720 start.go:293] postStartSetup for "addons-266389" (driver="docker")
	I1216 04:11:13.524078  442720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:11:13.524184  442720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:11:13.524270  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:13.542554  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:13.641578  442720 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:11:13.645355  442720 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:11:13.645384  442720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:11:13.645396  442720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:11:13.645470  442720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:11:13.645511  442720 start.go:296] duration metric: took 121.437067ms for postStartSetup
	I1216 04:11:13.645846  442720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-266389
	I1216 04:11:13.665774  442720 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/config.json ...
	I1216 04:11:13.666088  442720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:11:13.666142  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:13.683640  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:13.778684  442720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:11:13.783664  442720 start.go:128] duration metric: took 11.923501736s to createHost
	I1216 04:11:13.783687  442720 start.go:83] releasing machines lock for "addons-266389", held for 11.923629934s
	I1216 04:11:13.783756  442720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-266389
	I1216 04:11:13.801186  442720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:11:13.801261  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:13.801493  442720 ssh_runner.go:195] Run: cat /version.json
	I1216 04:11:13.801537  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:13.828452  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:13.829910  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:14.014366  442720 ssh_runner.go:195] Run: systemctl --version
	I1216 04:11:14.021120  442720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:11:14.067580  442720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:11:14.072206  442720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:11:14.072314  442720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:11:14.107285  442720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 04:11:14.107320  442720 start.go:496] detecting cgroup driver to use...
	I1216 04:11:14.107378  442720 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:11:14.107457  442720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:11:14.125666  442720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:11:14.138359  442720 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:11:14.138467  442720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:11:14.156397  442720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:11:14.175606  442720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:11:14.299643  442720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:11:14.419483  442720 docker.go:234] disabling docker service ...
	I1216 04:11:14.419552  442720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:11:14.444416  442720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:11:14.457800  442720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:11:14.572505  442720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:11:14.693554  442720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:11:14.707822  442720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:11:14.723294  442720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:11:14.723415  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.732798  442720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:11:14.732923  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.742029  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.750586  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.759604  442720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:11:14.767481  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.776443  442720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.790173  442720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:11:14.799199  442720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:11:14.806986  442720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:11:14.814637  442720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:14.926134  442720 ssh_runner.go:195] Run: sudo systemctl restart crio
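Collapsing the sed calls above, the CRI-O runtime configuration step amounts to the following (same file and values as in the log):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# pin the pause image and the cgroup driver minikube detected on the host
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	# drop any stale conmon_cgroup, then re-add it right after cgroup_manager
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio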
	I1216 04:11:15.117578  442720 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:11:15.117672  442720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:11:15.121839  442720 start.go:564] Will wait 60s for crictl version
	I1216 04:11:15.121946  442720 ssh_runner.go:195] Run: which crictl
	I1216 04:11:15.125981  442720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:11:15.159054  442720 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:11:15.159187  442720 ssh_runner.go:195] Run: crio --version
	I1216 04:11:15.194445  442720 ssh_runner.go:195] Run: crio --version
	I1216 04:11:15.228816  442720 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 04:11:15.231600  442720 cli_runner.go:164] Run: docker network inspect addons-266389 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:11:15.247753  442720 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:11:15.251792  442720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:11:15.261795  442720 kubeadm.go:884] updating cluster {Name:addons-266389 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-266389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:11:15.261911  442720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:11:15.261973  442720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:15.303415  442720 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:11:15.303442  442720 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:11:15.303497  442720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:11:15.328121  442720 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:11:15.328145  442720 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:11:15.328153  442720 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1216 04:11:15.328253  442720 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-266389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-266389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:11:15.328344  442720 ssh_runner.go:195] Run: crio config
	I1216 04:11:15.385689  442720 cni.go:84] Creating CNI manager for ""
	I1216 04:11:15.385710  442720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:11:15.385730  442720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:11:15.385773  442720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-266389 NodeName:addons-266389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:11:15.385928  442720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-266389"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:11:15.386002  442720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 04:11:15.393911  442720 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:11:15.393986  442720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:11:15.401743  442720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 04:11:15.414301  442720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 04:11:15.427771  442720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
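With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before the real init runs below; a sketch using the binary path from the log (kubeadm config validate is available in recent kubeadm releases):

	# static validation of the config documents without touching the node
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
	# or exercise the full init code path with no side effects
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --dry-run \
	  --config /var/tmp/minikube/kubeadm.yaml.new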
	I1216 04:11:15.440648  442720 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:11:15.444292  442720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:11:15.454860  442720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:15.574219  442720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:15.593745  442720 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389 for IP: 192.168.49.2
	I1216 04:11:15.593771  442720 certs.go:195] generating shared ca certs ...
	I1216 04:11:15.593788  442720 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:15.593991  442720 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:11:16.288935  442720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt ...
	I1216 04:11:16.288971  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt: {Name:mkef7cc8e40cf9cf18882fc19685f38beb3555c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.289180  442720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key ...
	I1216 04:11:16.289194  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key: {Name:mk95ea541e007c7a661178f6b17e1b58b4611c6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.289282  442720 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:11:16.538924  442720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt ...
	I1216 04:11:16.538960  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt: {Name:mk467ffb251f3855bd5f201ad1a531b5d81ec1b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.539148  442720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key ...
	I1216 04:11:16.539161  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key: {Name:mkfc5e95d42754d910610cfe88527f26994e5612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.539245  442720 certs.go:257] generating profile certs ...
	I1216 04:11:16.539310  442720 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.key
	I1216 04:11:16.539326  442720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt with IP's: []
	I1216 04:11:16.934282  442720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt ...
	I1216 04:11:16.934318  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: {Name:mk722f651548c20b8e386acd15601cc2b9235cd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.934510  442720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.key ...
	I1216 04:11:16.934524  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.key: {Name:mk974ebc87a89355626c1d66a8f9a00bb589e1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:16.934613  442720 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key.34fef09e
	I1216 04:11:16.934632  442720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt.34fef09e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 04:11:17.147468  442720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt.34fef09e ...
	I1216 04:11:17.147503  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt.34fef09e: {Name:mk082d56ec7a26652ba27537bb6baa1777f23918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:17.147684  442720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key.34fef09e ...
	I1216 04:11:17.147704  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key.34fef09e: {Name:mkd71827f418350327e2411ff753dff35207360e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:17.147809  442720 certs.go:382] copying /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt.34fef09e -> /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt
	I1216 04:11:17.147898  442720 certs.go:386] copying /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key.34fef09e -> /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key
	I1216 04:11:17.147951  442720 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.key
	I1216 04:11:17.147973  442720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.crt with IP's: []
	I1216 04:11:17.375348  442720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.crt ...
	I1216 04:11:17.375379  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.crt: {Name:mk05692efb75cf03d41d0c1f39bc0a2b14ef23e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:17.375556  442720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.key ...
	I1216 04:11:17.375570  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.key: {Name:mk4b7f58427ab539dd343313509f686816ea3d31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:17.375799  442720 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:11:17.375846  442720 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:11:17.375879  442720 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:11:17.375909  442720 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:11:17.376592  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:11:17.395029  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:11:17.412856  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:11:17.431212  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:11:17.449283  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 04:11:17.467280  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 04:11:17.485053  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:11:17.503016  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 04:11:17.521340  442720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:11:17.539164  442720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:11:17.552410  442720 ssh_runner.go:195] Run: openssl version
	I1216 04:11:17.558830  442720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:17.566721  442720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:11:17.574660  442720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:17.578437  442720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:17.578502  442720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:11:17.620579  442720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:11:17.628308  442720 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
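The b5213941.0 symlink follows OpenSSL's subject-hash convention: trust lookups resolve /etc/ssl/certs/<hash>.0, where <hash> is what the openssl x509 -hash call above printed. Reproducing the link name by hand:

	# prints the subject hash, b5213941 for this particular CA
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"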
	I1216 04:11:17.636469  442720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:11:17.640339  442720 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:11:17.640390  442720 kubeadm.go:401] StartCluster: {Name:addons-266389 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-266389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:11:17.640461  442720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:11:17.640540  442720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:11:17.667266  442720 cri.go:89] found id: ""
	I1216 04:11:17.667387  442720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:11:17.675136  442720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:11:17.682929  442720 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:11:17.682993  442720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:11:17.690795  442720 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:11:17.690817  442720 kubeadm.go:158] found existing configuration files:
	
	I1216 04:11:17.690889  442720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:11:17.698705  442720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:11:17.698770  442720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:11:17.705799  442720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:11:17.713044  442720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:11:17.713181  442720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:11:17.720495  442720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:11:17.728032  442720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:11:17.728097  442720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:11:17.735299  442720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:11:17.742937  442720 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:11:17.742999  442720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:11:17.750293  442720 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:11:17.787269  442720 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 04:11:17.787363  442720 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:11:17.823254  442720 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:11:17.823328  442720 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:11:17.823368  442720 kubeadm.go:319] OS: Linux
	I1216 04:11:17.823424  442720 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:11:17.823484  442720 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:11:17.823539  442720 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:11:17.823591  442720 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:11:17.823647  442720 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:11:17.823701  442720 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:11:17.823749  442720 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:11:17.823800  442720 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:11:17.823849  442720 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:11:17.905734  442720 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:11:17.905889  442720 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:11:17.906025  442720 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:11:17.914672  442720 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:11:17.919039  442720 out.go:252]   - Generating certificates and keys ...
	I1216 04:11:17.919159  442720 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:11:17.919239  442720 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:11:18.132156  442720 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:11:18.826033  442720 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:11:19.319790  442720 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:11:19.827992  442720 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:11:20.080061  442720 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:11:20.081684  442720 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-266389 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:11:20.132428  442720 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:11:20.132746  442720 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-266389 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:11:20.266705  442720 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:11:21.021209  442720 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:11:21.581413  442720 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:11:21.581703  442720 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:11:22.095204  442720 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:11:22.540202  442720 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:11:23.210649  442720 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:11:23.903829  442720 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:11:24.135916  442720 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:11:24.136484  442720 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:11:24.139810  442720 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:11:24.143250  442720 out.go:252]   - Booting up control plane ...
	I1216 04:11:24.143360  442720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:11:24.143437  442720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:11:24.144899  442720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:11:24.160619  442720 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:11:24.160969  442720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:11:24.169758  442720 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:11:24.170072  442720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:11:24.170365  442720 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:11:24.305587  442720 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:11:24.305712  442720 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:11:25.299921  442720 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000669394s
	I1216 04:11:25.303657  442720 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 04:11:25.303752  442720 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1216 04:11:25.303850  442720 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 04:11:25.303937  442720 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 04:11:27.645654  442720 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.341374418s
	I1216 04:11:29.058180  442720 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.754443072s
	I1216 04:11:30.805369  442720 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501539521s
	I1216 04:11:30.837124  442720 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 04:11:30.858867  442720 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 04:11:30.877394  442720 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 04:11:30.877877  442720 kubeadm.go:319] [mark-control-plane] Marking the node addons-266389 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 04:11:30.889770  442720 kubeadm.go:319] [bootstrap-token] Using token: lcp9n3.z0gj24q1nalp0g4f
	I1216 04:11:30.892718  442720 out.go:252]   - Configuring RBAC rules ...
	I1216 04:11:30.892852  442720 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 04:11:30.899218  442720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 04:11:30.907983  442720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 04:11:30.912496  442720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 04:11:30.916621  442720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 04:11:30.920796  442720 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 04:11:31.213333  442720 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 04:11:31.655878  442720 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 04:11:32.211912  442720 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 04:11:32.213108  442720 kubeadm.go:319] 
	I1216 04:11:32.213187  442720 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 04:11:32.213210  442720 kubeadm.go:319] 
	I1216 04:11:32.213294  442720 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 04:11:32.213302  442720 kubeadm.go:319] 
	I1216 04:11:32.213328  442720 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 04:11:32.213390  442720 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 04:11:32.213445  442720 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 04:11:32.213453  442720 kubeadm.go:319] 
	I1216 04:11:32.213515  442720 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 04:11:32.213523  442720 kubeadm.go:319] 
	I1216 04:11:32.213571  442720 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 04:11:32.213578  442720 kubeadm.go:319] 
	I1216 04:11:32.213630  442720 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 04:11:32.213709  442720 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 04:11:32.213783  442720 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 04:11:32.213791  442720 kubeadm.go:319] 
	I1216 04:11:32.213875  442720 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 04:11:32.213956  442720 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 04:11:32.213963  442720 kubeadm.go:319] 
	I1216 04:11:32.214047  442720 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lcp9n3.z0gj24q1nalp0g4f \
	I1216 04:11:32.214154  442720 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e08298e77cafc423d9b109ab7877d99e66f943a14d7b74758966013799c879bb \
	I1216 04:11:32.214177  442720 kubeadm.go:319] 	--control-plane 
	I1216 04:11:32.214181  442720 kubeadm.go:319] 
	I1216 04:11:32.214270  442720 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 04:11:32.214276  442720 kubeadm.go:319] 
	I1216 04:11:32.214359  442720 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lcp9n3.z0gj24q1nalp0g4f \
	I1216 04:11:32.214461  442720 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e08298e77cafc423d9b109ab7877d99e66f943a14d7b74758966013799c879bb 
	I1216 04:11:32.218506  442720 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 04:11:32.218744  442720 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:11:32.218849  442720 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:11:32.218866  442720 cni.go:84] Creating CNI manager for ""
	I1216 04:11:32.218874  442720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:11:32.222331  442720 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 04:11:32.225240  442720 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 04:11:32.229638  442720 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 04:11:32.229661  442720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 04:11:32.244551  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
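Once the CNI manifest is applied, its rollout can be confirmed like so (minikube's kindnet manifest deploys a DaemonSet in kube-system; the name kindnet is assumed here):

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  rollout status daemonset/kindnet --timeout=60s   # name assumed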
	I1216 04:11:32.531564  442720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 04:11:32.531688  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:32.531784  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-266389 minikube.k8s.io/updated_at=2025_12_16T04_11_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=addons-266389 minikube.k8s.io/primary=true
	I1216 04:11:32.780880  442720 ops.go:34] apiserver oom_adj: -16
	I1216 04:11:32.780992  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:33.281832  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:33.781970  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:34.281971  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:34.781894  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:35.282078  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:35.781247  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:36.281181  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:36.781627  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:37.281853  442720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:11:37.376291  442720 kubeadm.go:1114] duration metric: took 4.844646206s to wait for elevateKubeSystemPrivileges
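The repeated "get sa default" calls above are a readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controller has come up, so elevateKubeSystemPrivileges retries on a short interval. The equivalent shell loop:

	# poll until the default ServiceAccount exists (what the log above is doing)
	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done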
	I1216 04:11:37.376323  442720 kubeadm.go:403] duration metric: took 19.735936496s to StartCluster
	I1216 04:11:37.376341  442720 settings.go:142] acquiring lock: {Name:mk7579526d30444d4a36dd9eeacfd82389e55168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:37.376453  442720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:11:37.376874  442720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:11:37.377129  442720 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:11:37.377277  442720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 04:11:37.377545  442720 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:11:37.377588  442720 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 04:11:37.377668  442720 addons.go:70] Setting yakd=true in profile "addons-266389"
	I1216 04:11:37.377683  442720 addons.go:239] Setting addon yakd=true in "addons-266389"
	I1216 04:11:37.377706  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.378213  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.378470  442720 addons.go:70] Setting inspektor-gadget=true in profile "addons-266389"
	I1216 04:11:37.378499  442720 addons.go:239] Setting addon inspektor-gadget=true in "addons-266389"
	I1216 04:11:37.378521  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.378978  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.379465  442720 addons.go:70] Setting metrics-server=true in profile "addons-266389"
	I1216 04:11:37.379483  442720 addons.go:239] Setting addon metrics-server=true in "addons-266389"
	I1216 04:11:37.379503  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.379905  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.384071  442720 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-266389"
	I1216 04:11:37.384116  442720 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-266389"
	I1216 04:11:37.384169  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.384835  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385230  442720 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-266389"
	I1216 04:11:37.385361  442720 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-266389"
	I1216 04:11:37.385987  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.389823  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385523  442720 addons.go:70] Setting cloud-spanner=true in profile "addons-266389"
	I1216 04:11:37.390783  442720 addons.go:239] Setting addon cloud-spanner=true in "addons-266389"
	I1216 04:11:37.390986  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.391508  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385535  442720 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-266389"
	I1216 04:11:37.418169  442720 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-266389"
	I1216 04:11:37.418205  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.418704  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385547  442720 addons.go:70] Setting default-storageclass=true in profile "addons-266389"
	I1216 04:11:37.427450  442720 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-266389"
	I1216 04:11:37.385554  442720 addons.go:70] Setting gcp-auth=true in profile "addons-266389"
	I1216 04:11:37.441336  442720 mustload.go:66] Loading cluster: addons-266389
	I1216 04:11:37.441705  442720 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:11:37.442134  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.444485  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385560  442720 addons.go:70] Setting ingress=true in profile "addons-266389"
	I1216 04:11:37.467393  442720 addons.go:239] Setting addon ingress=true in "addons-266389"
	I1216 04:11:37.467507  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.468022  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.385570  442720 addons.go:70] Setting ingress-dns=true in profile "addons-266389"
	I1216 04:11:37.481891  442720 addons.go:239] Setting addon ingress-dns=true in "addons-266389"
	I1216 04:11:37.481963  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.482684  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
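Each "Setting addon" step above is preceded by the same cli_runner probe: docker container inspect with a Go template that extracts only the container's state. A self-contained sketch of that check, assuming only the docker CLI and the profile name from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerStatus runs the same command as the cli_runner.go:164 lines
    // above and returns e.g. "running" or "exited".
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        status, err := containerStatus("addons-266389")
        if err != nil {
            panic(err)
        }
        fmt.Println(status)
    }
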
	I1216 04:11:37.385623  442720 out.go:179] * Verifying Kubernetes components...
	I1216 04:11:37.509141  442720 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 04:11:37.509370  442720 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1216 04:11:37.385803  442720 addons.go:70] Setting volcano=true in profile "addons-266389"
	I1216 04:11:37.385817  442720 addons.go:70] Setting registry=true in profile "addons-266389"
	I1216 04:11:37.385825  442720 addons.go:70] Setting registry-creds=true in profile "addons-266389"
	I1216 04:11:37.385835  442720 addons.go:70] Setting storage-provisioner=true in profile "addons-266389"
	I1216 04:11:37.385847  442720 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-266389"
	I1216 04:11:37.385897  442720 addons.go:70] Setting volumesnapshots=true in profile "addons-266389"
	I1216 04:11:37.518809  442720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:11:37.532741  442720 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 04:11:37.532781  442720 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 04:11:37.532885  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.538694  442720 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1216 04:11:37.539309  442720 addons.go:239] Setting addon registry=true in "addons-266389"
	I1216 04:11:37.539451  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.540324  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.552513  442720 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 04:11:37.552606  442720 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 04:11:37.552781  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.553776  442720 addons.go:239] Setting addon registry-creds=true in "addons-266389"
	I1216 04:11:37.553836  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.554481  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.554945  442720 addons.go:239] Setting addon volcano=true in "addons-266389"
	I1216 04:11:37.555000  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.564892  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.567270  442720 addons.go:239] Setting addon storage-provisioner=true in "addons-266389"
	I1216 04:11:37.567329  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.567955  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.574846  442720 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1216 04:11:37.593312  442720 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 04:11:37.593338  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1216 04:11:37.593437  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
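Lines of the form "scp memory --> <path> (N bytes)" stream a manifest rendered in memory over the SSH session rather than copying a file from disk; the preceding docker container inspect call resolves the host port mapped to the container's 22/tcp (33133 in the sshutil lines below). A rough equivalent, assuming that port, the key path from the sshutil lines, and a hypothetical example.yaml payload; minikube writes the bytes through its own SSH client rather than shelling out like this:

    package main

    import (
        "bytes"
        "os/exec"
    )

    // pushManifest streams stdin over SSH into a root-owned file, mirroring
    // the "scp memory --> <path> (N bytes)" log lines.
    func pushManifest(content []byte, remotePath string) error {
        cmd := exec.Command("ssh", "-p", "33133",
            "-i", "/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa",
            "docker@127.0.0.1",
            "sudo tee "+remotePath+" >/dev/null")
        cmd.Stdin = bytes.NewReader(content)
        return cmd.Run()
    }

    func main() {
        _ = pushManifest([]byte("apiVersion: v1\n"), "/etc/kubernetes/addons/example.yaml")
    }
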
	I1216 04:11:37.597439  442720 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-266389"
	I1216 04:11:37.597963  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.612273  442720 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1216 04:11:37.615265  442720 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1216 04:11:37.615292  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 04:11:37.615370  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.618042  442720 addons.go:239] Setting addon volumesnapshots=true in "addons-266389"
	I1216 04:11:37.618103  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.618610  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.637245  442720 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 04:11:37.637678  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.640271  442720 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 04:11:37.640291  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 04:11:37.640353  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.641000  442720 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 04:11:37.641026  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 04:11:37.641183  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.660773  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 04:11:37.663915  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 04:11:37.669228  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 04:11:37.679599  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 04:11:37.681637  442720 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1216 04:11:37.704051  442720 addons.go:239] Setting addon default-storageclass=true in "addons-266389"
	I1216 04:11:37.704091  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.704491  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.717021  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 04:11:37.717576  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:37.728585  442720 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1216 04:11:37.729114  442720 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:11:37.739138  442720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
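The bash pipeline above fetches the coredns ConfigMap, edits the Corefile with sed, and feeds the result back through kubectl replace. The two sed expressions splice a log directive in front of the existing errors line and, in front of the "forward . /etc/resolv.conf" line, the stanza below (quoted from the command itself), which is what makes host.minikube.internal resolve to the gateway; the "host record injected" line at 04:11:39 confirms it landed:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
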
	I1216 04:11:37.743221  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 04:11:37.775769  442720 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:11:37.788982  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:37.790990  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 04:11:37.795379  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 04:11:37.801469  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 04:11:37.801499  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 04:11:37.801615  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.811643  442720 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 04:11:37.811741  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 04:11:37.811933  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.830765  442720 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 04:11:37.830795  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1216 04:11:37.830912  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.879360  442720 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-266389"
	I1216 04:11:37.879410  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:37.879831  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:37.887977  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:37.896775  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	W1216 04:11:37.898398  442720 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 04:11:37.922941  442720 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1216 04:11:37.926048  442720 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 04:11:37.926075  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1216 04:11:37.926168  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.937618  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:37.948515  442720 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1216 04:11:37.951478  442720 out.go:179]   - Using image docker.io/registry:3.0.0
	I1216 04:11:37.955085  442720 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 04:11:37.955110  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 04:11:37.955181  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.982840  442720 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:37.982903  442720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:11:37.982993  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:37.984790  442720 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 04:11:37.989051  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 04:11:37.989107  442720 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 04:11:37.989171  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:38.008253  442720 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:11:38.011729  442720 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:38.011759  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:11:38.011846  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:38.052406  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.078035  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.111583  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.111928  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.122495  442720 out.go:179]   - Using image docker.io/busybox:stable
	I1216 04:11:38.126152  442720 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 04:11:38.130397  442720 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 04:11:38.130421  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 04:11:38.130489  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:38.131198  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.157373  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.160860  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:38.173910  442720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:11:38.176196  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	W1216 04:11:38.184638  442720 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:11:38.184688  442720 retry.go:31] will retry after 155.541845ms: ssh: handshake failed: EOF
	W1216 04:11:38.184834  442720 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:11:38.184844  442720 retry.go:31] will retry after 127.179581ms: ssh: handshake failed: EOF
	I1216 04:11:38.201215  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	W1216 04:11:38.202928  442720 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:11:38.202966  442720 retry.go:31] will retry after 227.368976ms: ssh: handshake failed: EOF
	I1216 04:11:38.213269  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	W1216 04:11:38.345851  442720 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 04:11:38.345879  442720 retry.go:31] will retry after 504.257003ms: ssh: handshake failed: EOF
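The dial failures above are several parallel SSH sessions racing the guest's sshd; each handshake EOF is downgraded to a warning and rescheduled by retry.go after a short randomized delay. A generic sketch of that retry-with-randomized-backoff pattern (not minikube's actual retry package; attempt count and delay bounds are illustrative):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // withRetry reruns op until it succeeds or attempts are exhausted,
    // sleeping a randomized delay between tries, as in the retry.go:31 lines.
    func withRetry(attempts int, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            delay := time.Duration(100+rand.Intn(400)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        _ = withRetry(3, func() error {
            return errors.New("ssh: handshake failed: EOF")
        })
    }
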
	I1216 04:11:38.526161  442720 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 04:11:38.526181  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 04:11:38.583977  442720 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 04:11:38.584045  442720 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 04:11:38.739168  442720 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 04:11:38.739196  442720 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 04:11:38.831416  442720 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 04:11:38.831446  442720 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 04:11:38.880782  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 04:11:38.934647  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 04:11:38.934689  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 04:11:38.947299  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 04:11:38.972222  442720 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 04:11:38.972251  442720 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 04:11:38.973924  442720 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 04:11:38.973950  442720 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 04:11:38.976344  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 04:11:38.976489  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 04:11:39.025027  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 04:11:39.028602  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 04:11:39.047754  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 04:11:39.083060  442720 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 04:11:39.083091  442720 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 04:11:39.088865  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:11:39.148946  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 04:11:39.156730  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 04:11:39.156759  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 04:11:39.163455  442720 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 04:11:39.163482  442720 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 04:11:39.167154  442720 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 04:11:39.167177  442720 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 04:11:39.211201  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 04:11:39.281887  442720 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 04:11:39.281914  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 04:11:39.339569  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:11:39.340734  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 04:11:39.340758  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 04:11:39.387969  442720 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 04:11:39.387995  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 04:11:39.388302  442720 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 04:11:39.388315  442720 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 04:11:39.428935  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 04:11:39.439896  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 04:11:39.439969  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 04:11:39.529400  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 04:11:39.529487  442720 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 04:11:39.560858  442720 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 04:11:39.560934  442720 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 04:11:39.567396  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 04:11:39.741290  442720 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:11:39.741363  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 04:11:39.762045  442720 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.022869955s)
	I1216 04:11:39.762122  442720 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1216 04:11:39.763118  442720 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.589176011s)
	I1216 04:11:39.763793  442720 node_ready.go:35] waiting up to 6m0s for node "addons-266389" to be "Ready" ...
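node_ready.go:35 starts a six-minute poll of the node's Ready condition; the W-level node_ready.go:57 lines that follow are individual poll results until the kubelet reports Ready. The same wait expressed as a one-shot kubectl call, assuming the kubeconfig used above (minikube polls the API directly instead):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Block until the node's Ready condition is True, or fail after 6m.
        cmd := exec.Command("kubectl", "wait", "node/addons-266389",
            "--for=condition=Ready", "--timeout=6m")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }
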
	I1216 04:11:39.806419  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 04:11:39.806500  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 04:11:40.109312  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 04:11:40.109378  442720 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 04:11:40.182430  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:11:40.266369  442720 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-266389" context rescaled to 1 replicas
	I1216 04:11:40.319228  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 04:11:40.319254  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 04:11:40.487048  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 04:11:40.487073  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 04:11:40.592564  442720 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 04:11:40.592600  442720 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 04:11:40.798855  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 04:11:41.155354  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.274495408s)
	I1216 04:11:41.155427  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.208102655s)
	W1216 04:11:41.774834  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:43.139076  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.162694263s)
	I1216 04:11:43.139294  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.162786309s)
	W1216 04:11:43.776802  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:43.938199  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.913131596s)
	I1216 04:11:43.938232  442720 addons.go:495] Verifying addon ingress=true in "addons-266389"
	I1216 04:11:43.938443  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.909794826s)
	I1216 04:11:43.938491  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.890713775s)
	I1216 04:11:43.938570  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.849681197s)
	I1216 04:11:43.938639  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.789668256s)
	I1216 04:11:43.938651  442720 addons.go:495] Verifying addon metrics-server=true in "addons-266389"
	I1216 04:11:43.938702  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.72747972s)
	I1216 04:11:43.938744  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.599146238s)
	I1216 04:11:43.938770  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.509757609s)
	I1216 04:11:43.938933  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.371462908s)
	I1216 04:11:43.938946  442720 addons.go:495] Verifying addon registry=true in "addons-266389"
	I1216 04:11:43.939099  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.756585651s)
	W1216 04:11:43.939125  442720 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 04:11:43.939139  442720 retry.go:31] will retry after 300.130251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
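The failure above is an ordering race inside a single kubectl apply: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are submitted together, and the REST mapping for the new kind is not yet discoverable when the class is applied, hence "ensure CRDs are installed first". The retry at 04:11:44 below re-applies with --force and succeeds once the CRDs are established (completion logged at 04:11:46). A sketch of the other common fix, splitting the apply and waiting for the CRD's Established condition; file names are the manifest paths from the log, assumed to be in the working directory:

    package main

    import (
        "os"
        "os/exec"
    )

    func run(args ...string) error {
        cmd := exec.Command(args[0], args[1:]...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        steps := [][]string{
            // 1. Create the CRD first.
            {"kubectl", "apply", "-f", "snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
            // 2. Wait until the API server can serve the new kind.
            {"kubectl", "wait", "--for=condition=Established",
                "crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"},
            // 3. Only then apply objects of that kind.
            {"kubectl", "apply", "-f", "csi-hostpath-snapshotclass.yaml"},
        }
        for _, s := range steps {
            if err := run(s...); err != nil {
                os.Exit(1)
            }
        }
    }
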
	I1216 04:11:43.941420  442720 out.go:179] * Verifying ingress addon...
	I1216 04:11:43.943434  442720 out.go:179] * Verifying registry addon...
	I1216 04:11:43.943490  442720 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-266389 service yakd-dashboard -n yakd-dashboard
	
	I1216 04:11:43.947202  442720 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 04:11:43.947997  442720 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 04:11:43.955378  442720 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 04:11:43.955403  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:43.955915  442720 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 04:11:43.955934  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
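kapi.go:75 registers a wait per label selector and each kapi.go:96 line below is one poll result, repeating until the matched pods leave Pending; most of the remainder of this log is that poll loop interleaved across the registry, ingress-nginx, csi-hostpath-driver, and gcp-auth selectors. A one-shot kubectl equivalent for one selector, as a sketch (minikube watches the pods through client-go rather than shelling out, and waits on pod phase rather than the stricter Ready condition used here):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Wait for every pod matching the registry addon's label to be Ready.
        cmd := exec.Command("kubectl", "wait", "pod",
            "-l", "kubernetes.io/minikube-addons=registry",
            "-n", "kube-system",
            "--for=condition=Ready", "--timeout=5m")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }
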
	I1216 04:11:44.185502  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.386592453s)
	I1216 04:11:44.185587  442720 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-266389"
	I1216 04:11:44.189005  442720 out.go:179] * Verifying csi-hostpath-driver addon...
	I1216 04:11:44.191933  442720 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 04:11:44.200370  442720 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 04:11:44.200399  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:44.240461  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:11:44.452570  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:44.452841  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:44.696072  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:44.951297  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:44.951894  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:45.196218  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:45.285041  442720 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 04:11:45.285184  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:45.307855  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:45.414682  442720 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 04:11:45.428338  442720 addons.go:239] Setting addon gcp-auth=true in "addons-266389"
	I1216 04:11:45.428385  442720 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:11:45.428829  442720 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:11:45.447012  442720 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 04:11:45.447063  442720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:11:45.451607  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:45.455119  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:45.467407  442720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:11:45.695035  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:45.951001  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:45.952289  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:46.195382  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:46.267143  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:46.450630  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:46.451768  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:46.697135  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:46.947530  442720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.707021381s)
	I1216 04:11:46.947667  442720 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.500630917s)
	I1216 04:11:46.950923  442720 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:11:46.951469  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:46.951947  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:46.956641  442720 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 04:11:46.959452  442720 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 04:11:46.959520  442720 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 04:11:46.973347  442720 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 04:11:46.973412  442720 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 04:11:46.986433  442720 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 04:11:46.986456  442720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 04:11:47.000044  442720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 04:11:47.196025  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:47.455705  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:47.456056  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:47.537360  442720 addons.go:495] Verifying addon gcp-auth=true in "addons-266389"
	I1216 04:11:47.542492  442720 out.go:179] * Verifying gcp-auth addon...
	I1216 04:11:47.546221  442720 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 04:11:47.556267  442720 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 04:11:47.556335  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:47.695461  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:47.951111  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:47.951370  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:48.050355  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:48.195729  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:48.451265  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:48.452363  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:48.549518  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:48.695667  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:48.767954  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:48.950394  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:48.950964  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:49.050377  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:49.195354  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:49.450216  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:49.451710  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:49.549516  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:49.695686  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:49.951437  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:49.951587  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:50.050072  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:50.194963  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:50.451311  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:50.451524  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:50.549470  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:50.696145  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:50.951513  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:50.951646  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:51.050022  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:51.195246  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:51.267100  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:51.451350  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:51.451492  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:51.558447  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:51.695502  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:51.950721  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:51.951449  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:52.049489  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:52.196120  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:52.451396  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:52.451830  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:52.549515  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:52.695760  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:52.951232  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:52.951415  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:53.049529  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:53.195471  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:53.267221  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:53.450150  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:53.451603  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:53.549273  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:53.695139  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:53.950640  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:53.952158  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:54.049174  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:54.194936  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:54.451623  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:54.452119  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:54.549956  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:54.695475  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:54.950026  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:54.951130  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:55.049347  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:55.195220  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:55.267585  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:55.450627  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:55.451976  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:55.549908  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:55.696016  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:55.950358  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:55.951829  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:56.049667  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:56.195342  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:56.450506  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:56.451114  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:56.549923  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:56.696164  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:56.951560  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:56.951999  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:57.049744  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:57.195601  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:57.451003  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:57.451390  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:57.549691  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:57.695638  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:11:57.768061  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:11:57.950344  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:57.951182  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:58.050333  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:58.195266  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:58.451342  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:58.451702  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:58.549743  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:58.695597  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:58.950220  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:58.951503  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:59.049277  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:59.195867  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:59.453322  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:59.453753  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:11:59.549795  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:11:59.696097  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:11:59.950917  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:11:59.951113  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:00.050673  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:00.210136  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:00.271659  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:00.454084  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:00.454370  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:00.550152  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:00.695242  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:00.951243  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:00.951306  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:01.049025  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:01.195106  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:01.449986  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:01.451411  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:01.549028  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:01.695333  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:01.951580  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:01.951776  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:02.049971  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:02.194858  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:02.452974  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:02.453212  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:02.550175  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:02.695677  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:02.767808  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:02.951358  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:02.951413  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:03.048970  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:03.194806  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:03.451660  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:03.451956  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:03.549885  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:03.695728  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:03.950918  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:03.951209  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:04.050048  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:04.195150  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:04.451587  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:04.451721  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:04.549600  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:04.696019  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:04.768600  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:04.950746  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:04.951404  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:05.049622  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:05.195614  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:05.450829  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:05.451073  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:05.549986  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:05.694990  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:05.950677  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:05.951439  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:06.056875  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:06.195034  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:06.450737  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:06.451447  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:06.549288  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:06.696089  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:06.950487  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:06.951515  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:07.049611  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:07.195564  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:07.267473  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:07.450274  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:07.451616  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:07.549484  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:07.695833  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:07.951291  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:07.951765  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:08.049994  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:08.194990  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:08.451158  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:08.451491  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:08.549115  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:08.695045  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:08.951968  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:08.952121  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:09.050102  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:09.194797  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:09.450757  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:09.450934  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:09.550003  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:09.695104  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:09.766748  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:09.950986  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:09.951319  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:10.049483  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:10.195383  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:10.451076  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:10.451416  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:10.549291  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:10.695765  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:10.951579  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:10.951689  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:11.049573  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:11.195858  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:11.451465  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:11.453056  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:11.551009  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:11.695260  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:11.766991  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:11.950149  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:11.950798  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:12.049627  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:12.195559  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:12.451303  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:12.451540  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:12.549266  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:12.695757  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:12.950357  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:12.951571  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:13.049796  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:13.196017  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:13.450296  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:13.451283  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:13.549423  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:13.696375  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:13.767110  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:13.950285  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:13.950950  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:14.049927  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:14.195774  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:14.450863  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:14.451103  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:14.548998  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:14.695196  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:14.950450  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:14.951070  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:15.050460  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:15.195594  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:15.451376  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:15.451500  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:15.549529  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:15.695388  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:15.768326  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:15.950863  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:15.951487  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:16.049681  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:16.196044  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:16.451315  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:16.451473  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:16.554319  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:16.695453  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:16.951695  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:16.951812  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:17.049695  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:17.195745  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:17.450974  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:17.451072  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:17.549795  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:17.695109  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:17.952208  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:17.952350  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:18.049532  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:18.195506  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1216 04:12:18.267306  442720 node_ready.go:57] node "addons-266389" has "Ready":"False" status (will retry)
	I1216 04:12:18.451228  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:18.451353  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:18.549390  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:18.695196  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:18.950990  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:18.951135  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:19.050001  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:19.213202  442720 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 04:12:19.213231  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
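
Each of the kapi.go:96 lines above is one poll iteration: list the pods matching a label selector, report the phase, and retry while anything is still Pending. A minimal client-go sketch of that check (podsPhase is a hypothetical name; minikube's real wait also checks container readiness and enforces an overall timeout):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsPhase performs one iteration of the check behind the kapi.go:96 lines:
// list pods matching the label selector, print each phase, and report whether
// every pod is Running yet. Sketch only; the real wait also checks readiness.
func podsPhase(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		return false, err // no pods yet counts as "keep waiting"
	}
	all := true
	for _, p := range pods.Items {
		fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			all = false
		}
	}
	return all, nil
}

Called every few hundred milliseconds, this reproduces the cadence of the registry / ingress-nginx / gcp-auth / csi-hostpath-driver lines throughout this log.
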
	I1216 04:12:19.291087  442720 node_ready.go:49] node "addons-266389" is "Ready"
	I1216 04:12:19.291128  442720 node_ready.go:38] duration metric: took 39.527281242s for node "addons-266389" to be "Ready" ...
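
The node_ready.go wait that just completed (39.5s) follows the same polling pattern against the node's Ready condition. A sketch using apimachinery's poll helper (waitNodeReady is a hypothetical name; assumes apimachinery new enough to have PollUntilContextTimeout):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// timeout lapses, the shape of the node_ready.go wait logged above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API blips as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
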
	I1216 04:12:19.291142  442720 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:12:19.291201  442720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:12:19.312218  442720 api_server.go:72] duration metric: took 41.935045837s to wait for apiserver process to appear ...
	I1216 04:12:19.312245  442720 api_server.go:88] waiting for apiserver healthz status ...
	I1216 04:12:19.312266  442720 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 04:12:19.326704  442720 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 04:12:19.328620  442720 api_server.go:141] control plane version: v1.34.2
	I1216 04:12:19.328651  442720 api_server.go:131] duration metric: took 16.39836ms to wait for apiserver health ...
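
The healthz wait above is a plain HTTPS GET against https://192.168.49.2:8443/healthz until it answers 200 with body "ok", exactly as logged. A minimal sketch of that loop (InsecureSkipVerify is a shortcut for illustration only; the real check verifies the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz GETs the endpoint until it returns 200 with body "ok".
// InsecureSkipVerify is a sketch-only stand-in for real cert handling.
func pollHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("healthz at %s never returned ok", url)
}

func main() {
	if err := pollHealthz("https://192.168.49.2:8443/healthz", 30); err != nil {
		fmt.Println(err)
	}
}
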
	I1216 04:12:19.328661  442720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 04:12:19.339803  442720 system_pods.go:59] 19 kube-system pods found
	I1216 04:12:19.339841  442720 system_pods.go:61] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:12:19.339848  442720 system_pods.go:61] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending
	I1216 04:12:19.339855  442720 system_pods.go:61] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending
	I1216 04:12:19.339860  442720 system_pods.go:61] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending
	I1216 04:12:19.339864  442720 system_pods.go:61] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:19.339870  442720 system_pods.go:61] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:19.339875  442720 system_pods.go:61] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:19.339879  442720 system_pods.go:61] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:19.339882  442720 system_pods.go:61] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending
	I1216 04:12:19.339886  442720 system_pods.go:61] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:19.339890  442720 system_pods.go:61] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:19.339896  442720 system_pods.go:61] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending
	I1216 04:12:19.339900  442720 system_pods.go:61] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending
	I1216 04:12:19.339907  442720 system_pods.go:61] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending
	I1216 04:12:19.339911  442720 system_pods.go:61] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending
	I1216 04:12:19.339915  442720 system_pods.go:61] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending
	I1216 04:12:19.339935  442720 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending
	I1216 04:12:19.339939  442720 system_pods.go:61] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending
	I1216 04:12:19.339942  442720 system_pods.go:61] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Pending
	I1216 04:12:19.339948  442720 system_pods.go:74] duration metric: took 11.281108ms to wait for pod list to return data ...
	I1216 04:12:19.339958  442720 default_sa.go:34] waiting for default service account to be created ...
	I1216 04:12:19.345302  442720 default_sa.go:45] found service account: "default"
	I1216 04:12:19.345332  442720 default_sa.go:55] duration metric: took 5.367009ms for default service account to be created ...
	I1216 04:12:19.345343  442720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 04:12:19.361587  442720 system_pods.go:86] 19 kube-system pods found
	I1216 04:12:19.361625  442720 system_pods.go:89] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:12:19.361633  442720 system_pods.go:89] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending
	I1216 04:12:19.361638  442720 system_pods.go:89] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending
	I1216 04:12:19.361642  442720 system_pods.go:89] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending
	I1216 04:12:19.361646  442720 system_pods.go:89] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:19.361651  442720 system_pods.go:89] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:19.361655  442720 system_pods.go:89] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:19.361660  442720 system_pods.go:89] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:19.361671  442720 system_pods.go:89] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending
	I1216 04:12:19.361675  442720 system_pods.go:89] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:19.361680  442720 system_pods.go:89] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:19.361687  442720 system_pods.go:89] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending
	I1216 04:12:19.361691  442720 system_pods.go:89] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending
	I1216 04:12:19.361695  442720 system_pods.go:89] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending
	I1216 04:12:19.361706  442720 system_pods.go:89] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending
	I1216 04:12:19.361710  442720 system_pods.go:89] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending
	I1216 04:12:19.361714  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending
	I1216 04:12:19.361719  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending
	I1216 04:12:19.361728  442720 system_pods.go:89] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Pending
	I1216 04:12:19.361743  442720 retry.go:31] will retry after 259.306788ms: missing components: kube-dns
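
The retry.go:31 lines come from a check that each expected kube-system component has a Running pod; missing names are reported and the next wait backs off with jitter. A sketch of that shape (waitForApps and podsRunning are hypothetical stand-ins for minikube's internals; the real loop also enforces an overall timeout):

package sketch

import (
	"log"
	"math/rand"
	"strings"
	"time"
)

// waitForApps blocks until every expected component has a Running pod,
// logging a jittered retry line each pass, like retry.go:31 above.
func waitForApps(expected []string, podsRunning func() map[string]bool) {
	for {
		running := podsRunning()
		var missing []string
		for _, app := range expected {
			if !running[app] {
				missing = append(missing, app)
			}
		}
		if len(missing) == 0 {
			return
		}
		d := time.Duration(250+rand.Intn(350)) * time.Millisecond
		log.Printf("will retry after %v: missing components: %s", d, strings.Join(missing, " "))
		time.Sleep(d)
	}
}
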
	I1216 04:12:19.504587  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:19.505044  442720 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 04:12:19.505080  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:19.603299  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:19.666117  442720 system_pods.go:86] 19 kube-system pods found
	I1216 04:12:19.666157  442720 system_pods.go:89] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:12:19.666168  442720 system_pods.go:89] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:12:19.666173  442720 system_pods.go:89] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending
	I1216 04:12:19.666180  442720 system_pods.go:89] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending
	I1216 04:12:19.666183  442720 system_pods.go:89] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:19.666188  442720 system_pods.go:89] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:19.666192  442720 system_pods.go:89] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:19.666197  442720 system_pods.go:89] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:19.666202  442720 system_pods.go:89] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending
	I1216 04:12:19.666211  442720 system_pods.go:89] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:19.666215  442720 system_pods.go:89] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:19.666219  442720 system_pods.go:89] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending
	I1216 04:12:19.666226  442720 system_pods.go:89] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending
	I1216 04:12:19.666232  442720 system_pods.go:89] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:12:19.666246  442720 system_pods.go:89] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:12:19.666250  442720 system_pods.go:89] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending
	I1216 04:12:19.666258  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:19.666267  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending
	I1216 04:12:19.666271  442720 system_pods.go:89] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Pending
	I1216 04:12:19.666285  442720 retry.go:31] will retry after 330.370387ms: missing components: kube-dns
	I1216 04:12:19.712049  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:19.958354  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:19.960328  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:20.008251  442720 system_pods.go:86] 19 kube-system pods found
	I1216 04:12:20.008303  442720 system_pods.go:89] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:12:20.008314  442720 system_pods.go:89] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:12:20.008322  442720 system_pods.go:89] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:12:20.008329  442720 system_pods.go:89] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:12:20.008335  442720 system_pods.go:89] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:20.008341  442720 system_pods.go:89] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:20.008346  442720 system_pods.go:89] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:20.008353  442720 system_pods.go:89] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:20.008362  442720 system_pods.go:89] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:12:20.008371  442720 system_pods.go:89] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:20.008377  442720 system_pods.go:89] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:20.008383  442720 system_pods.go:89] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:12:20.008393  442720 system_pods.go:89] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:12:20.008400  442720 system_pods.go:89] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:12:20.008413  442720 system_pods.go:89] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:12:20.008422  442720 system_pods.go:89] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:12:20.008431  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:20.008439  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:20.008445  442720 system_pods.go:89] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 04:12:20.008463  442720 retry.go:31] will retry after 417.545915ms: missing components: kube-dns
	I1216 04:12:20.059767  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:20.197096  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:20.433352  442720 system_pods.go:86] 19 kube-system pods found
	I1216 04:12:20.433389  442720 system_pods.go:89] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:12:20.433398  442720 system_pods.go:89] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:12:20.433406  442720 system_pods.go:89] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:12:20.433412  442720 system_pods.go:89] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:12:20.433429  442720 system_pods.go:89] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:20.433438  442720 system_pods.go:89] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:20.433452  442720 system_pods.go:89] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:20.433456  442720 system_pods.go:89] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:20.433471  442720 system_pods.go:89] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:12:20.433475  442720 system_pods.go:89] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:20.433480  442720 system_pods.go:89] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:20.433491  442720 system_pods.go:89] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:12:20.433498  442720 system_pods.go:89] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:12:20.433503  442720 system_pods.go:89] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:12:20.433509  442720 system_pods.go:89] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:12:20.433519  442720 system_pods.go:89] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:12:20.433528  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:20.433539  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:20.433543  442720 system_pods.go:89] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Running
	I1216 04:12:20.433563  442720 retry.go:31] will retry after 567.761058ms: missing components: kube-dns
	I1216 04:12:20.452468  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:20.455435  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:20.550099  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:20.695755  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:20.965526  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:20.982007  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:21.067474  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:21.069650  442720 system_pods.go:86] 19 kube-system pods found
	I1216 04:12:21.069681  442720 system_pods.go:89] "coredns-66bc5c9577-6mwzd" [c16a18bd-ba39-4f25-a294-00a94ce250e4] Running
	I1216 04:12:21.069697  442720 system_pods.go:89] "csi-hostpath-attacher-0" [f78f15de-bc62-4454-9ae6-cc935b31f2ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:12:21.069705  442720 system_pods.go:89] "csi-hostpath-resizer-0" [815edbdc-723a-496f-980d-0f2be07dfa85] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:12:21.069715  442720 system_pods.go:89] "csi-hostpathplugin-4cntk" [76c9b687-92c4-4dd8-9c3f-47d3f175f3cf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:12:21.069719  442720 system_pods.go:89] "etcd-addons-266389" [14f4b7c2-0752-42e5-9e79-981f20dd1782] Running
	I1216 04:12:21.069724  442720 system_pods.go:89] "kindnet-b74jx" [e99635cf-92b4-4bb2-a224-c4939328d20a] Running
	I1216 04:12:21.069728  442720 system_pods.go:89] "kube-apiserver-addons-266389" [a7361d5c-f618-4273-b397-bd875595376e] Running
	I1216 04:12:21.069733  442720 system_pods.go:89] "kube-controller-manager-addons-266389" [783042cd-55a0-424b-bf44-79d93a1b5e3b] Running
	I1216 04:12:21.069740  442720 system_pods.go:89] "kube-ingress-dns-minikube" [8618db1f-f07b-4e30-bd8e-8a48edda137c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:12:21.069752  442720 system_pods.go:89] "kube-proxy-qjxqh" [e7b2b584-4520-421b-a5d7-616cfd0ed768] Running
	I1216 04:12:21.069766  442720 system_pods.go:89] "kube-scheduler-addons-266389" [2eed3540-33e9-48be-9902-9fd61b7665ab] Running
	I1216 04:12:21.069780  442720 system_pods.go:89] "metrics-server-85b7d694d7-5q887" [c959d53c-194d-408b-97ad-560ef2cd4be0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:12:21.069787  442720 system_pods.go:89] "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:12:21.069799  442720 system_pods.go:89] "registry-6b586f9694-6fhfq" [edfd3d1c-a046-4ed9-9140-f60d6d884765] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:12:21.069806  442720 system_pods.go:89] "registry-creds-764b6fb674-7cfhx" [d035c106-cbd0-4064-b23f-d8d1762768a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:12:21.069817  442720 system_pods.go:89] "registry-proxy-k95mm" [f9095f83-10c4-46e8-bdd0-eb4566408ed6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:12:21.069823  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ppgw" [a5cde31c-ffe9-4f0d-ae9d-56e86381ea36] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:21.069836  442720 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t752l" [0ed5b61e-f66c-4307-907a-a6a97c6c0982] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:12:21.069840  442720 system_pods.go:89] "storage-provisioner" [8a216864-7b03-4f90-8324-34cf51f444a6] Running
	I1216 04:12:21.069849  442720 system_pods.go:126] duration metric: took 1.724500102s to wait for k8s-apps to be running ...
	I1216 04:12:21.069860  442720 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 04:12:21.069916  442720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:12:21.088629  442720 system_svc.go:56] duration metric: took 18.759227ms WaitForService to wait for kubelet
	I1216 04:12:21.088658  442720 kubeadm.go:587] duration metric: took 43.711489975s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
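
The kubelet check above is `systemctl is-active --quiet` executed through minikube's ssh_runner; with --quiet, the exit status alone carries the answer. A local sketch with os/exec (no SSH layer; the extra `service` token seen in the logged command is dropped here):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether systemd considers the kubelet unit active;
// exit status 0 from `systemctl is-active --quiet` means active.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
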
	I1216 04:12:21.088676  442720 node_conditions.go:102] verifying NodePressure condition ...
	I1216 04:12:21.091946  442720 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 04:12:21.091978  442720 node_conditions.go:123] node cpu capacity is 2
	I1216 04:12:21.091994  442720 node_conditions.go:105] duration metric: took 3.31277ms to run NodePressure ...
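
The NodePressure step reads capacity and pressure conditions off the node object; a client-go sketch pulling the same fields that node_conditions.go reports above (printNodeResources is a hypothetical name, not minikube's helper):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeResources echoes the fields behind the node_conditions.go lines:
// ephemeral-storage and CPU capacity, plus any pressure conditions set True.
func printNodeResources(ctx context.Context, cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println("node storage ephemeral capacity is", node.Status.Capacity.StorageEphemeral().String())
	fmt.Println("node cpu capacity is", node.Status.Capacity.Cpu().String())
	for _, c := range node.Status.Conditions {
		if c.Status == corev1.ConditionTrue && c.Type != corev1.NodeReady {
			fmt.Println("pressure condition set:", c.Type)
		}
	}
	return nil
}
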
	I1216 04:12:21.092008  442720 start.go:242] waiting for startup goroutines ...
	I1216 04:12:21.196331  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:21.451780  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:21.452525  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:21.549636  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:21.696197  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:21.951359  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:21.951799  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:22.049981  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:22.195301  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:22.452046  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:22.452671  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:22.549571  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:22.695948  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:22.954407  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:22.954681  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:23.050165  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:23.195412  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:23.452527  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:23.453112  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:23.549959  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:23.696314  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:23.954405  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:23.954772  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:24.050080  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:24.195970  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:24.452774  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:24.452934  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:24.549638  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:24.695791  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:24.953700  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:24.953821  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:25.049931  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:25.196023  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:25.452095  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:25.452716  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:25.549752  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:25.696509  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:25.952028  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:25.952186  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:26.056080  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:26.195772  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:26.452022  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:26.452185  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:26.549263  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:26.696589  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:26.952501  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:26.952972  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:27.050590  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:27.196319  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:27.451532  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:27.451742  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:27.549539  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:27.696313  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:27.952214  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:27.953484  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:28.053505  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:28.196212  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:28.453089  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:28.453511  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:28.549523  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:28.696764  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:28.952519  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:28.952777  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:29.050457  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:29.196105  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:29.452540  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:29.452754  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:29.550072  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:29.696323  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:29.962999  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:29.963381  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:30.062033  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:30.195972  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:30.452615  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:30.453119  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:30.550205  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:30.696711  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:30.953288  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:30.953812  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:31.049331  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:31.195329  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:31.452841  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:31.453172  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:31.550378  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:31.696424  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:31.956398  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:31.956865  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:32.050202  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:32.196394  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:32.452210  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:32.452569  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:32.549593  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:32.696194  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:32.952953  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:32.953402  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:33.049838  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:33.196374  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:33.453020  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:33.453735  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:33.550209  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:33.695889  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:33.952932  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:33.953561  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:34.050502  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:34.195539  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:34.452146  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:34.452621  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:34.550272  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:34.696150  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:34.952754  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:34.952962  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:35.050480  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:35.204544  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:35.452017  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:35.453239  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:35.549338  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:35.696237  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:35.955369  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:35.957180  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:36.050300  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:36.196450  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:36.451468  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:36.452618  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:36.551258  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:36.695699  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:36.954606  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:36.955047  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:37.066404  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:37.204875  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:37.454535  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:37.455098  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:37.554857  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:37.696397  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:37.952434  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:37.952834  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:38.069178  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:38.195590  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:38.451714  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:38.451912  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:38.549761  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:38.696421  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:38.953194  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:38.953325  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:39.051329  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:39.195876  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:39.453525  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:39.453758  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:39.549928  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:39.695348  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:39.951139  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:39.951326  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:40.055122  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:40.202978  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:40.454335  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:40.454454  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:40.550470  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:40.703482  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:40.952698  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:40.953002  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:41.050552  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:41.207624  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:41.454664  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:41.455148  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:41.550421  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:41.696513  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:41.951422  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:41.952935  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:42.050665  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:42.196651  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:42.452704  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:42.454331  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:42.550179  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:42.695870  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:42.952197  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:42.952369  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:43.050331  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:43.197003  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:43.453158  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:43.454610  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:43.549861  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:43.695990  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:43.951333  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:43.952380  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:44.051033  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:44.200058  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:44.452814  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:44.453679  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:44.549961  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:44.696080  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:44.955272  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:44.955414  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:45.062864  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:45.197531  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:45.450958  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:45.451501  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:45.549692  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:45.696214  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:45.953082  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:45.953385  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:46.050169  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:46.196573  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:46.452422  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:46.452781  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:46.550075  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:46.695617  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:46.951594  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:46.951749  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:47.049669  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:47.196762  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:47.453712  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:47.454113  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:47.550620  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:47.696398  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:47.951540  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:47.951677  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:48.050167  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:48.195921  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:48.451012  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:48.451801  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:48.549685  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:48.696369  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:48.951796  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:48.953332  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:49.049367  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:49.195875  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:49.452525  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:49.453500  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:49.549668  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:49.696683  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:49.951935  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:49.952619  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:50.050294  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:50.196721  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:50.451360  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:50.451517  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:50.552306  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:50.696987  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:50.953461  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:50.953962  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:51.052450  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:51.195919  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:51.455857  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:51.456317  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:51.550030  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:51.698670  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:51.955867  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:51.957163  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:52.049667  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:52.197918  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:52.460124  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:52.460752  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:52.551430  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:52.696128  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:52.954185  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:52.954758  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:53.050083  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:53.196459  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:53.453974  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:53.454268  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:53.548856  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:53.695607  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:53.952120  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:53.952260  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:54.049349  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:54.196199  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:54.450576  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:54.452772  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:54.550225  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:54.695932  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:54.950637  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:54.952934  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:55.049945  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:55.194999  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:55.451075  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:55.451163  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:55.550235  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:55.695550  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:55.951728  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:55.951873  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:56.049676  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:56.204297  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:56.451598  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:56.451978  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:56.550258  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:56.695539  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:56.952761  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:56.953326  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:57.049597  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:57.195480  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:57.453022  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:57.453219  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:57.550240  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:57.695903  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:57.951655  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:57.951750  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:58.050254  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:58.195784  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:58.452984  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:58.465641  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:58.549738  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:58.696666  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:58.953134  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:58.953588  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:12:59.049730  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:59.197838  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:59.453369  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:12:59.454403  442720 kapi.go:107] duration metric: took 1m15.507202389s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 04:12:59.550016  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:12:59.699171  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:12:59.951318  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:00.083804  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:00.200403  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:00.455827  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:00.551935  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:00.713277  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:00.951622  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:01.052498  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:01.196007  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:01.453397  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:01.560692  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:01.695782  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:01.951010  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:02.050008  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:02.196885  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:02.452092  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:02.550320  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:02.695966  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:02.951782  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:03.049770  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:03.197175  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:03.451869  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:03.549851  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:03.695514  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:03.952165  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:04.049206  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:04.195835  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:04.452526  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:04.551826  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:04.696027  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:04.951523  442720 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:13:05.049710  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:05.197620  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:05.452313  442720 kapi.go:107] duration metric: took 1m21.504311633s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 04:13:05.549482  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:05.695848  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:06.049870  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:06.195165  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:06.550226  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:06.751530  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:07.049784  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:07.198133  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:07.552509  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:07.697373  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:08.049983  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:08.195777  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:08.549953  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:08.695133  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:09.050362  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:09.196072  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:09.558969  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:09.696165  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:10.055881  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:10.197196  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:10.549554  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:10.696633  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:11.052078  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:11.195551  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:11.550301  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:11.696356  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:12.049804  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:12.196382  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:12.549635  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:12.695843  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:13.050455  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:13:13.197497  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:13.550720  442720 kapi.go:107] duration metric: took 1m26.004500081s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 04:13:13.553722  442720 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-266389 cluster.
	I1216 04:13:13.556524  442720 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 04:13:13.559370  442720 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 04:13:13.696974  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:14.196571  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:14.695393  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:15.196887  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:15.695636  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:16.196306  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:16.696765  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:17.195793  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:17.695639  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:18.196062  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:18.697181  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:19.195530  442720 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:13:19.696431  442720 kapi.go:107] duration metric: took 1m35.50449885s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 04:13:19.699674  442720 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, ingress-dns, storage-provisioner, metrics-server, registry-creds, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1216 04:13:19.702616  442720 addons.go:530] duration metric: took 1m42.325021089s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner-rancher inspektor-gadget ingress-dns storage-provisioner metrics-server registry-creds yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1216 04:13:19.702678  442720 start.go:247] waiting for cluster config update ...
	I1216 04:13:19.702717  442720 start.go:256] writing updated cluster config ...
	I1216 04:13:19.703056  442720 ssh_runner.go:195] Run: rm -f paused
	I1216 04:13:19.708771  442720 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:13:19.712568  442720 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6mwzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.719163  442720 pod_ready.go:94] pod "coredns-66bc5c9577-6mwzd" is "Ready"
	I1216 04:13:19.719194  442720 pod_ready.go:86] duration metric: took 6.591998ms for pod "coredns-66bc5c9577-6mwzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.721614  442720 pod_ready.go:83] waiting for pod "etcd-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.726525  442720 pod_ready.go:94] pod "etcd-addons-266389" is "Ready"
	I1216 04:13:19.726555  442720 pod_ready.go:86] duration metric: took 4.913779ms for pod "etcd-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.729000  442720 pod_ready.go:83] waiting for pod "kube-apiserver-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.734150  442720 pod_ready.go:94] pod "kube-apiserver-addons-266389" is "Ready"
	I1216 04:13:19.734180  442720 pod_ready.go:86] duration metric: took 5.153748ms for pod "kube-apiserver-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:19.736976  442720 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:20.113693  442720 pod_ready.go:94] pod "kube-controller-manager-addons-266389" is "Ready"
	I1216 04:13:20.113744  442720 pod_ready.go:86] duration metric: took 376.73701ms for pod "kube-controller-manager-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:20.316333  442720 pod_ready.go:83] waiting for pod "kube-proxy-qjxqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:20.714266  442720 pod_ready.go:94] pod "kube-proxy-qjxqh" is "Ready"
	I1216 04:13:20.714307  442720 pod_ready.go:86] duration metric: took 397.947561ms for pod "kube-proxy-qjxqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:20.913995  442720 pod_ready.go:83] waiting for pod "kube-scheduler-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:21.312759  442720 pod_ready.go:94] pod "kube-scheduler-addons-266389" is "Ready"
	I1216 04:13:21.312786  442720 pod_ready.go:86] duration metric: took 398.765416ms for pod "kube-scheduler-addons-266389" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:13:21.312799  442720 pod_ready.go:40] duration metric: took 1.603995293s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:13:21.372470  442720 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1216 04:13:21.375862  442720 out.go:179] * Done! kubectl is now configured to use "addons-266389" cluster and "default" namespace by default
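The repeated kapi.go:96 lines above are minikube's readiness poll: each addon's label selector (registry, ingress-nginx, gcp-auth, csi-hostpath-driver) is re-listed about every 500ms until every matching pod reports the Ready condition, and the kapi.go:107 duration metrics mark the moment each selector converged. Below is a minimal client-go sketch of that pattern, assuming a kubeconfig at the default path; waitForPodsByLabel and the 6-minute timeout are invented for illustration, and this is not minikube's actual kapi.go implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsByLabel polls every 500ms until every pod matching selector
	// in namespace has the PodReady condition, or the timeout expires.
	func waitForPodsByLabel(cs kubernetes.Interface, namespace, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists just mean "poll again"
				}
				for _, p := range pods.Items {
					ready := false
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							ready = true
						}
					}
					if !ready {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		// e.g. the csi-hostpath-driver wait above, which took 1m35.5s in this run:
		if err := waitForPodsByLabel(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}

The later pod_ready.go waits check the same Ready condition for the kube-system control-plane pods, just per pod rather than per selector. The `gcp-auth-skip-secret` label mentioned at 04:13:13 is the inverse knob: pods carrying that label are skipped when gcp-auth mounts credentials.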
	
	
	==> CRI-O <==
	Dec 16 04:13:51 addons-266389 crio[828]: time="2025-12-16T04:13:51.087731854Z" level=info msg="Started container" PID=5244 containerID=8631211e3b1690b318fe5490c0ccb4633dcb31000183ade080484bd592415275 description=default/test-local-path/busybox id=cee46daa-0fd5-41fc-8993-5e8bf42fad02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cb9ed1489bd9057bd5b04b19427b3e43ec30ac4616c1f5984206053146d41e0
	Dec 16 04:13:52 addons-266389 crio[828]: time="2025-12-16T04:13:52.499890657Z" level=info msg="Stopping pod sandbox: 1cb9ed1489bd9057bd5b04b19427b3e43ec30ac4616c1f5984206053146d41e0" id=c5ff882c-4ee2-4007-aab5-e232f79bb660 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 04:13:52 addons-266389 crio[828]: time="2025-12-16T04:13:52.500177027Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:1cb9ed1489bd9057bd5b04b19427b3e43ec30ac4616c1f5984206053146d41e0 UID:f53fd2b2-f1ed-4205-8a69-a4c86e63d984 NetNS:/var/run/netns/5b009ea4-248b-4916-ba4a-165813995a46 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cb40}] Aliases:map[]}"
	Dec 16 04:13:52 addons-266389 crio[828]: time="2025-12-16T04:13:52.50032207Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Dec 16 04:13:52 addons-266389 crio[828]: time="2025-12-16T04:13:52.538722227Z" level=info msg="Stopped pod sandbox: 1cb9ed1489bd9057bd5b04b19427b3e43ec30ac4616c1f5984206053146d41e0" id=c5ff882c-4ee2-4007-aab5-e232f79bb660 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 04:13:53 addons-266389 crio[828]: time="2025-12-16T04:13:53.974690861Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879/POD" id=078d6b84-a6ee-41e3-af66-9e389da2380a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 04:13:53 addons-266389 crio[828]: time="2025-12-16T04:13:53.974755371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:53.991631628Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879 Namespace:local-path-storage ID:9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6 UID:e23ef18b-cb6d-44ad-bbea-417158dfa7c6 NetNS:/var/run/netns/721ad134-e6fe-4372-abf3-932ce5f4ccfc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000ac4ee8}] Aliases:map[]}"
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:53.991673515Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879 to CNI network \"kindnet\" (type=ptp)"
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.00174631Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879 Namespace:local-path-storage ID:9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6 UID:e23ef18b-cb6d-44ad-bbea-417158dfa7c6 NetNS:/var/run/netns/721ad134-e6fe-4372-abf3-932ce5f4ccfc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000ac4ee8}] Aliases:map[]}"
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.002071679Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879 for CNI network kindnet (type=ptp)"
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.018511313Z" level=info msg="Ran pod sandbox 9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6 with infra container: local-path-storage/helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879/POD" id=078d6b84-a6ee-41e3-af66-9e389da2380a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.020030351Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=342cc785-a8c1-4e4a-9f9b-2482bf72180b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.028859532Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=ab2c5baf-d854-4caa-b78d-f6065e91ca1d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.037970216Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879/helper-pod" id=1a8ba56f-eba7-47c3-8973-5b4b00fa81b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.038269025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.051524866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.052089467Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.071977691Z" level=info msg="Created container d684f205c5962d10387485a515e678209ec482c9fd19cf0c817a5b82c577312b: local-path-storage/helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879/helper-pod" id=1a8ba56f-eba7-47c3-8973-5b4b00fa81b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.073356978Z" level=info msg="Starting container: d684f205c5962d10387485a515e678209ec482c9fd19cf0c817a5b82c577312b" id=5c6c0d5d-64e2-46e8-80b2-d812a30d37bc name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 04:13:54 addons-266389 crio[828]: time="2025-12-16T04:13:54.07849631Z" level=info msg="Started container" PID=5332 containerID=d684f205c5962d10387485a515e678209ec482c9fd19cf0c817a5b82c577312b description=local-path-storage/helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879/helper-pod id=5c6c0d5d-64e2-46e8-80b2-d812a30d37bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6
	Dec 16 04:13:55 addons-266389 crio[828]: time="2025-12-16T04:13:55.513966981Z" level=info msg="Stopping pod sandbox: 9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6" id=eb820711-0e2a-446e-9df0-0f9a6d731690 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 04:13:55 addons-266389 crio[828]: time="2025-12-16T04:13:55.514260727Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879 Namespace:local-path-storage ID:9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6 UID:e23ef18b-cb6d-44ad-bbea-417158dfa7c6 NetNS:/var/run/netns/721ad134-e6fe-4372-abf3-932ce5f4ccfc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078240}] Aliases:map[]}"
	Dec 16 04:13:55 addons-266389 crio[828]: time="2025-12-16T04:13:55.514428622Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879 from CNI network \"kindnet\" (type=ptp)"
	Dec 16 04:13:55 addons-266389 crio[828]: time="2025-12-16T04:13:55.535542763Z" level=info msg="Stopped pod sandbox: 9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6" id=eb820711-0e2a-446e-9df0-0f9a6d731690 name=/runtime.v1.RuntimeService/StopPodSandbox
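
These CRI-O entries are the server side of CRI runtime.v1 calls (RunPodSandbox, CreateContainer, StartContainer, StopPodSandbox). A sketch of a client driving the same API over the socket; the /var/run/crio/crio.sock path is CRI-O's stock default and an assumption for this image:

// Sketch: list pod sandboxes over the CRI runtime.v1 API, the same service
// whose calls CRI-O logs above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path: the CRI-O default.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimev1.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(context.Background(), &runtimev1.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Printf("%s/%s  state=%s  id=%s\n",
			sb.Metadata.Namespace, sb.Metadata.Name, sb.State, sb.Id)
	}
}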
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	d684f205c5962       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   9583c8883c4a9       helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879   local-path-storage
	8631211e3b169       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   1cb9ed1489bd9       test-local-path                                              default
	c56148f6dcf73       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   a01fe0430f97f       helper-pod-create-pvc-12852da6-9e8a-4765-8a93-15cde56a9879   local-path-storage
	106a996d5d6db       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          31 seconds ago       Running             busybox                                  0                   44b2c79c4e368       busybox                                                      default
	12223ad132387       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          37 seconds ago       Running             csi-snapshotter                          0                   b91e84b66173f       csi-hostpathplugin-4cntk                                     kube-system
	0b4f3c5e893d7       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          38 seconds ago       Running             csi-provisioner                          0                   b91e84b66173f       csi-hostpathplugin-4cntk                                     kube-system
	c9070f308fd86       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            40 seconds ago       Running             liveness-probe                           0                   b91e84b66173f       csi-hostpathplugin-4cntk                                     kube-system
	48496242e59c5       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           41 seconds ago       Running             hostpath                                 0                   b91e84b66173f       csi-hostpathplugin-4cntk                                     kube-system
	a7474c7cb9f49       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 42 seconds ago       Running             gcp-auth                                 0                   127900e5f166a       gcp-auth-78565c9fb4-lzbjd                                    gcp-auth
	a222cf8717975       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                45 seconds ago       Running             node-driver-registrar                    0                   b91e84b66173f       csi-hostpathplugin-4cntk                                     kube-system
	c7eedac774bd3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            46 seconds ago       Running             gadget                                   0                   6c01c39b9f4d7       gadget-w7z9q                                                 gadget
	4c5dffefd81cd       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             50 seconds ago       Running             controller                               0                   9d9eb964ff234       ingress-nginx-controller-85d4c799dd-hbrzj                    ingress-nginx
	52a17616824e6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              57 seconds ago       Running             registry-proxy                           0                   57f19482f7475       registry-proxy-k95mm                                         kube-system
	3efc9d422c0c3       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   e6f26d96a71d3       nvidia-device-plugin-daemonset-pj9b6                         kube-system
	6e3be5772ff86       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   5fb5baa3e63aa       metrics-server-85b7d694d7-5q887                              kube-system
	6e142dfc84916       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   b91e84b66173f       csi-hostpathplugin-4cntk                                     kube-system
	4da4c59550ee3       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   a0fc26bc203c1       csi-hostpath-resizer-0                                       kube-system
	66770881f17c9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   ac15221f20c5d       snapshot-controller-7d9fbc56b8-t752l                         kube-system
	9939f0868fb7f       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   325089d857753       local-path-provisioner-648f6765c9-wpj9t                      local-path-storage
	84135c3563dc8       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   0d01c988792ec       registry-6b586f9694-6fhfq                                    kube-system
	179d32a34b981       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   76c447629ef9d       cloud-spanner-emulator-5bdddb765-z56bg                       default
	c3264da7d66b8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              patch                                    0                   2f2576a92cab9       ingress-nginx-admission-patch-8m974                          ingress-nginx
	698b79e9ff28b       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   795d45c083f00       kube-ingress-dns-minikube                                    kube-system
	63eba54ed2b9b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   7265206ba3b3c       csi-hostpath-attacher-0                                      kube-system
	8b24d28c9cf9a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   cadcf9d984087       snapshot-controller-7d9fbc56b8-4ppgw                         kube-system
	c56201a9b5ad7       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   89b8fe93f4571       yakd-dashboard-5ff678cb9-vt9kv                               yakd-dashboard
	51949d99c72d1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   e79ac9856339a       ingress-nginx-admission-create-n7d4f                         ingress-nginx
	b3d0766b0e4db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   e954c6def2f36       coredns-66bc5c9577-6mwzd                                     kube-system
	198a5f79252ec       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   96e8b2944b892       storage-provisioner                                          kube-system
	71f0cfb9d9516       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago        Running             kube-proxy                               0                   2731a7b865e7a       kube-proxy-qjxqh                                             kube-system
	cb4b75c762835       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   08a103110ce8d       kindnet-b74jx                                                kube-system
	9e53dfcedc5ae       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   2dfdf6c9f85dd       kube-controller-manager-addons-266389                        kube-system
	4f4977c8f895c       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   1af2faf775e7b       kube-scheduler-addons-266389                                 kube-system
	6fd0cf07fb532       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   082537ad4aec4       kube-apiserver-addons-266389                                 kube-system
	d27466cb0ef32       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   4e9cbe27e2bb7       etcd-addons-266389                                           kube-system
	
	
	==> coredns [b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7] <==
	[INFO] 10.244.0.17:43091 - 54870 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002393074s
	[INFO] 10.244.0.17:43091 - 57104 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000147447s
	[INFO] 10.244.0.17:43091 - 23109 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000250464s
	[INFO] 10.244.0.17:49326 - 18940 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154808s
	[INFO] 10.244.0.17:49326 - 18495 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000276081s
	[INFO] 10.244.0.17:43249 - 63453 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114504s
	[INFO] 10.244.0.17:43249 - 62984 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000163432s
	[INFO] 10.244.0.17:60253 - 28489 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111189s
	[INFO] 10.244.0.17:60253 - 28275 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000142828s
	[INFO] 10.244.0.17:58532 - 29703 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006377136s
	[INFO] 10.244.0.17:58532 - 30181 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006519439s
	[INFO] 10.244.0.17:57867 - 35577 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014355s
	[INFO] 10.244.0.17:57867 - 35757 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321291s
	[INFO] 10.244.0.21:33194 - 59766 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000169675s
	[INFO] 10.244.0.21:42409 - 61510 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000136543s
	[INFO] 10.244.0.21:33764 - 50073 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000178906s
	[INFO] 10.244.0.21:54747 - 44674 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143329s
	[INFO] 10.244.0.21:33825 - 466 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134434s
	[INFO] 10.244.0.21:43453 - 61701 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087156s
	[INFO] 10.244.0.21:33515 - 22580 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002190742s
	[INFO] 10.244.0.21:48036 - 14052 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001781416s
	[INFO] 10.244.0.21:39909 - 51309 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004799882s
	[INFO] 10.244.0.21:40339 - 64010 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001750729s
	[INFO] 10.244.0.23:33740 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000262583s
	[INFO] 10.244.0.23:48097 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000102672s
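
The NXDOMAIN cascade above is normal ndots behavior: the pod's resolv.conf search list makes the client try the name under kube-system.svc.cluster.local, svc.cluster.local, cluster.local, and the EC2 suffix before the bare service name answers NOERROR. A trailing dot marks the name fully qualified and skips the suffixes; a small sketch (10.96.0.10 is the conventional kube-dns Service IP and an assumption here):

// Sketch: query the cluster DNS directly for a fully-qualified service name.
// The trailing dot prevents the resolv.conf search suffixes from being
// appended, avoiding the NXDOMAIN round-trips seen in the coredns log.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Assumed cluster DNS address (kube-dns default).
			return (&net.Dialer{}).DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "registry.kube-system.svc.cluster.local.")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(addrs)
}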
	
	
	==> describe nodes <==
	Name:               addons-266389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-266389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=addons-266389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T04_11_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-266389
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-266389"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 04:11:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-266389
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 04:13:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 04:13:43 +0000   Tue, 16 Dec 2025 04:11:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 04:13:43 +0000   Tue, 16 Dec 2025 04:11:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 04:13:43 +0000   Tue, 16 Dec 2025 04:11:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 04:13:43 +0000   Tue, 16 Dec 2025 04:12:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-266389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b01d95696b577408f2b2782693c8bc0
	  System UUID:                ca615f09-a740-47f8-928c-e2f0056267cb
	  Boot ID:                    e72ece1f-d416-4c20-8564-468e8b5f7888
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     cloud-spanner-emulator-5bdddb765-z56bg       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  gadget                      gadget-w7z9q                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  gcp-auth                    gcp-auth-78565c9fb4-lzbjd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-hbrzj    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m13s
	  kube-system                 coredns-66bc5c9577-6mwzd                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 csi-hostpathplugin-4cntk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 etcd-addons-266389                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-b74jx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-addons-266389                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-addons-266389        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-proxy-qjxqh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-addons-266389                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 metrics-server-85b7d694d7-5q887              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m14s
	  kube-system                 nvidia-device-plugin-daemonset-pj9b6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 registry-6b586f9694-6fhfq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 registry-creds-764b6fb674-7cfhx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 registry-proxy-k95mm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 snapshot-controller-7d9fbc56b8-4ppgw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 snapshot-controller-7d9fbc56b8-t752l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  local-path-storage          local-path-provisioner-648f6765c9-wpj9t      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vt9kv               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node addons-266389 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node addons-266389 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s (x8 over 2m31s)  kubelet          Node addons-266389 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node addons-266389 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node addons-266389 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node addons-266389 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m21s                  node-controller  Node addons-266389 event: Registered Node addons-266389 in Controller
	  Normal   NodeReady                97s                    kubelet          Node addons-266389 status is now: NodeReady
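
The node report above is kubectl describe output; the same conditions (MemoryPressure, DiskPressure, PIDPressure, Ready) can be read programmatically. A sketch with client-go, with kubeconfig discovery simplified to the default path:

// Sketch: read the node conditions shown in the "describe nodes" block.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-266389", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}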
	
	
	==> dmesg <==
	[Dec16 01:17] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014643] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.519830] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df] <==
	{"level":"warn","ts":"2025-12-16T04:11:27.757471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.781869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.813233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.836099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.857648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.877665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.894069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.909813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.933368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.945483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.966585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:27.985034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.014663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.032488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.069614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.104797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.119788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.143217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:28.212373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:44.528588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:11:44.546775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:12:05.975105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:12:05.989773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:12:06.026447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T04:12:06.040960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37658","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [a7474c7cb9f49060f42bfcb5204a0c64c8c19f1d15cd53cd9d307abfe50b208c] <==
	2025/12/16 04:13:13 GCP Auth Webhook started!
	2025/12/16 04:13:21 Ready to marshal response ...
	2025/12/16 04:13:21 Ready to write response ...
	2025/12/16 04:13:22 Ready to marshal response ...
	2025/12/16 04:13:22 Ready to write response ...
	2025/12/16 04:13:22 Ready to marshal response ...
	2025/12/16 04:13:22 Ready to write response ...
	2025/12/16 04:13:43 Ready to marshal response ...
	2025/12/16 04:13:43 Ready to write response ...
	2025/12/16 04:13:45 Ready to marshal response ...
	2025/12/16 04:13:45 Ready to write response ...
	2025/12/16 04:13:46 Ready to marshal response ...
	2025/12/16 04:13:46 Ready to write response ...
	2025/12/16 04:13:53 Ready to marshal response ...
	2025/12/16 04:13:53 Ready to write response ...
	2025/12/16 04:13:55 Ready to marshal response ...
	2025/12/16 04:13:55 Ready to write response ...
	
	
	==> kernel <==
	 04:13:56 up  2:56,  0 user,  load average: 3.72, 2.50, 1.90
	Linux addons-266389 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e] <==
	E1216 04:12:08.720053       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1216 04:12:08.721455       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1216 04:12:10.119681       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 04:12:10.119712       1 metrics.go:72] Registering metrics
	I1216 04:12:10.119791       1 controller.go:711] "Syncing nftables rules"
	I1216 04:12:18.725184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:12:18.725240       1 main.go:301] handling current node
	I1216 04:12:28.719011       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:12:28.719067       1 main.go:301] handling current node
	I1216 04:12:38.718431       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:12:38.718496       1 main.go:301] handling current node
	I1216 04:12:48.719186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:12:48.719270       1 main.go:301] handling current node
	I1216 04:12:58.718564       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:12:58.718697       1 main.go:301] handling current node
	I1216 04:13:08.721181       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:13:08.721261       1 main.go:301] handling current node
	I1216 04:13:18.718977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:13:18.719098       1 main.go:301] handling current node
	I1216 04:13:28.719202       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:13:28.719261       1 main.go:301] handling current node
	I1216 04:13:38.725184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:13:38.725297       1 main.go:301] handling current node
	I1216 04:13:48.718801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 04:13:48.718832       1 main.go:301] handling current node
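
The failed watches followed by "Caches are synced" are standard client-go reflector behavior: list/watch retries until the apiserver answers, then the informer cache syncs and the periodic node reconcile (every ~10s here) takes over. A minimal in-cluster sketch of that sync barrier:

// Sketch: wait for an informer cache to sync, the step kindnet logs as
// "Caches are synced" after its reflector retries succeed.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // kindnet runs in-cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Blocks (with reflector retries on transient errors) until the first
	// successful List completes.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches are synced")
}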
	
	
	==> kube-apiserver [6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628] <==
	E1216 04:12:19.161931       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.21.2:443: connect: connection refused" logger="UnhandledError"
	W1216 04:12:19.163270       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.21.2:443: connect: connection refused
	E1216 04:12:19.163504       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.21.2:443: connect: connection refused" logger="UnhandledError"
	W1216 04:12:19.250396       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.21.2:443: connect: connection refused
	E1216 04:12:19.252023       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.21.2:443: connect: connection refused" logger="UnhandledError"
	W1216 04:12:43.008850       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 04:12:43.008909       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1216 04:12:43.008924       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 04:12:43.009883       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 04:12:43.009965       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 04:12:43.009980       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 04:13:03.228617       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 04:13:03.228729       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1216 04:13:03.229773       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.150.152:443: connect: connection refused" logger="UnhandledError"
	E1216 04:13:03.234514       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.150.152:443: connect: connection refused" logger="UnhandledError"
	E1216 04:13:03.238557       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.150.152:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.150.152:443: connect: connection refused" logger="UnhandledError"
	I1216 04:13:03.378893       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 04:13:32.355652       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34174: use of closed network connection
	E1216 04:13:32.613738       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34216: use of closed network connection
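
Both error families above stem from the v1beta1.metrics.k8s.io APIService being unreachable while metrics-server warms up; the gcp-auth webhook lines fail open, so admission proceeds regardless. The APIService's Available condition shows when it recovers; a sketch reading it with the dynamic client (kubeconfig discovery simplified to the default path):

// Sketch: read the Available condition of the metrics APIService that the
// apiserver log above complains about.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dc, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	gvr := schema.GroupVersionResource{Group: "apiregistration.k8s.io", Version: "v1", Resource: "apiservices"}
	svc, err := dc.Resource(gvr).Get(context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	conds, found, err := unstructured.NestedSlice(svc.Object, "status", "conditions")
	if err != nil || !found {
		log.Fatal("no status.conditions on APIService")
	}
	for _, c := range conds {
		m := c.(map[string]interface{})
		fmt.Printf("%v=%v (%v)\n", m["type"], m["status"], m["reason"])
	}
}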
	
	
	==> kube-controller-manager [9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3] <==
	I1216 04:11:35.991824       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 04:11:35.992020       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 04:11:35.992409       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 04:11:35.992447       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 04:11:35.992467       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 04:11:35.997756       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 04:11:35.998558       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 04:11:35.998593       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 04:11:35.998611       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 04:11:35.998643       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 04:11:35.998660       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 04:11:35.998664       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 04:11:35.998669       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 04:11:36.013482       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-266389" podCIDRs=["10.244.0.0/24"]
	E1216 04:11:42.203244       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1216 04:12:05.967585       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 04:12:05.967735       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1216 04:12:05.967791       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1216 04:12:06.013704       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 04:12:06.018767       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1216 04:12:06.068876       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:12:06.119503       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 04:12:20.947256       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1216 04:12:36.074560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 04:12:36.128227       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e] <==
	I1216 04:11:38.448784       1 server_linux.go:53] "Using iptables proxy"
	I1216 04:11:38.562763       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 04:11:38.663730       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 04:11:38.663767       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1216 04:11:38.663836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 04:11:38.920269       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 04:11:38.920322       1 server_linux.go:132] "Using iptables Proxier"
	I1216 04:11:38.927163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 04:11:38.927460       1 server.go:527] "Version info" version="v1.34.2"
	I1216 04:11:38.927480       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 04:11:38.929923       1 config.go:200] "Starting service config controller"
	I1216 04:11:38.929945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 04:11:38.929965       1 config.go:106] "Starting endpoint slice config controller"
	I1216 04:11:38.929969       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 04:11:38.929982       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 04:11:38.929986       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 04:11:38.930591       1 config.go:309] "Starting node config controller"
	I1216 04:11:38.930600       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 04:11:38.930606       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 04:11:39.030023       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 04:11:39.030107       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 04:11:39.030120       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d] <==
	E1216 04:11:29.051441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 04:11:29.051480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:11:29.051530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 04:11:29.051567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 04:11:29.051599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 04:11:29.051690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 04:11:29.051731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 04:11:29.051762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 04:11:29.051792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:11:29.052157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 04:11:29.052206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 04:11:29.057133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1216 04:11:29.057242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 04:11:29.864925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 04:11:29.992778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 04:11:30.023493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 04:11:30.033019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 04:11:30.102190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 04:11:30.148234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:11:30.203407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 04:11:30.215409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 04:11:30.241794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 04:11:30.251481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 04:11:30.451515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1216 04:11:33.630810       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 04:13:53 addons-266389 kubelet[1273]: I1216 04:13:53.607482    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f53fd2b2-f1ed-4205-8a69-a4c86e63d984" path="/var/lib/kubelet/pods/f53fd2b2-f1ed-4205-8a69-a4c86e63d984/volumes"
	Dec 16 04:13:53 addons-266389 kubelet[1273]: I1216 04:13:53.777094    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwvp\" (UniqueName: \"kubernetes.io/projected/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-kube-api-access-spwvp\") pod \"helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879\" (UID: \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\") " pod="local-path-storage/helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879"
	Dec 16 04:13:53 addons-266389 kubelet[1273]: I1216 04:13:53.777160    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-script\") pod \"helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879\" (UID: \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\") " pod="local-path-storage/helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879"
	Dec 16 04:13:53 addons-266389 kubelet[1273]: I1216 04:13:53.777203    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-data\") pod \"helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879\" (UID: \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\") " pod="local-path-storage/helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879"
	Dec 16 04:13:53 addons-266389 kubelet[1273]: I1216 04:13:53.777318    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-gcp-creds\") pod \"helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879\" (UID: \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\") " pod="local-path-storage/helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879"
	Dec 16 04:13:54 addons-266389 kubelet[1273]: W1216 04:13:54.008672    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/crio-9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6 WatchSource:0}: Error finding container 9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6: Status 404 returned error can't find the container with id 9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.592671    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-gcp-creds\") pod \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\" (UID: \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\") "
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.592724    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spwvp\" (UniqueName: \"kubernetes.io/projected/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-kube-api-access-spwvp\") pod \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\" (UID: \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\") "
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.592775    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-script\") pod \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\" (UID: \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\") "
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.592792    1273 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-data\") pod \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\" (UID: \"e23ef18b-cb6d-44ad-bbea-417158dfa7c6\") "
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.592948    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-data" (OuterVolumeSpecName: "data") pod "e23ef18b-cb6d-44ad-bbea-417158dfa7c6" (UID: "e23ef18b-cb6d-44ad-bbea-417158dfa7c6"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.592978    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e23ef18b-cb6d-44ad-bbea-417158dfa7c6" (UID: "e23ef18b-cb6d-44ad-bbea-417158dfa7c6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.594558    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-script" (OuterVolumeSpecName: "script") pod "e23ef18b-cb6d-44ad-bbea-417158dfa7c6" (UID: "e23ef18b-cb6d-44ad-bbea-417158dfa7c6"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.600849    1273 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-kube-api-access-spwvp" (OuterVolumeSpecName: "kube-api-access-spwvp") pod "e23ef18b-cb6d-44ad-bbea-417158dfa7c6" (UID: "e23ef18b-cb6d-44ad-bbea-417158dfa7c6"). InnerVolumeSpecName "kube-api-access-spwvp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.693694    1273 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-gcp-creds\") on node \"addons-266389\" DevicePath \"\""
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.693749    1273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-spwvp\" (UniqueName: \"kubernetes.io/projected/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-kube-api-access-spwvp\") on node \"addons-266389\" DevicePath \"\""
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.693763    1273 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-script\") on node \"addons-266389\" DevicePath \"\""
	Dec 16 04:13:55 addons-266389 kubelet[1273]: I1216 04:13:55.693774    1273 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/e23ef18b-cb6d-44ad-bbea-417158dfa7c6-data\") on node \"addons-266389\" DevicePath \"\""
	Dec 16 04:13:56 addons-266389 kubelet[1273]: I1216 04:13:56.103239    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bc56fcf7-5127-4597-973c-489e1d96f3d1-gcp-creds\") pod \"task-pv-pod\" (UID: \"bc56fcf7-5127-4597-973c-489e1d96f3d1\") " pod="default/task-pv-pod"
	Dec 16 04:13:56 addons-266389 kubelet[1273]: I1216 04:13:56.103396    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd247\" (UniqueName: \"kubernetes.io/projected/bc56fcf7-5127-4597-973c-489e1d96f3d1-kube-api-access-xd247\") pod \"task-pv-pod\" (UID: \"bc56fcf7-5127-4597-973c-489e1d96f3d1\") " pod="default/task-pv-pod"
	Dec 16 04:13:56 addons-266389 kubelet[1273]: I1216 04:13:56.103449    1273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56e82f29-b0f5-47bf-bb93-456c873be8d3\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a2deb4c6-da35-11f0-92ab-e692561302c8\") pod \"task-pv-pod\" (UID: \"bc56fcf7-5127-4597-973c-489e1d96f3d1\") " pod="default/task-pv-pod"
	Dec 16 04:13:56 addons-266389 kubelet[1273]: I1216 04:13:56.221702    1273 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-56e82f29-b0f5-47bf-bb93-456c873be8d3\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a2deb4c6-da35-11f0-92ab-e692561302c8\") pod \"task-pv-pod\" (UID: \"bc56fcf7-5127-4597-973c-489e1d96f3d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/4c00deb82e025a77cf0b06d2565ddd287127492bc9b7caf2d791a28230adc3d6/globalmount\"" pod="default/task-pv-pod"
	Dec 16 04:13:56 addons-266389 kubelet[1273]: W1216 04:13:56.349418    1273 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9c3b592c224e3349c2b2ee12637131a5d14173d733d371ef995bfbc1bedde987/crio-43a918c437b815fb082d959d1e092830d7e45dc88c5e5b41605475f22ac1f474 WatchSource:0}: Error finding container 43a918c437b815fb082d959d1e092830d7e45dc88c5e5b41605475f22ac1f474: Status 404 returned error can't find the container with id 43a918c437b815fb082d959d1e092830d7e45dc88c5e5b41605475f22ac1f474
	Dec 16 04:13:56 addons-266389 kubelet[1273]: I1216 04:13:56.519710    1273 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9583c8883c4a9b39c78f6319596b1db6b96fc2c050370204ad37df94e6c37bb6"
	Dec 16 04:13:56 addons-266389 kubelet[1273]: E1216 04:13:56.523014    1273 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879\" is forbidden: User \"system:node:addons-266389\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-266389' and this object" podUID="e23ef18b-cb6d-44ad-bbea-417158dfa7c6" pod="local-path-storage/helper-pod-delete-pvc-12852da6-9e8a-4765-8a93-15cde56a9879"
	
	
	==> storage-provisioner [198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b] <==
	W1216 04:13:32.377292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:34.381007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:34.385828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:36.389173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:36.394697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:38.398479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:38.405245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:40.408584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:40.413865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:42.417537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:42.422568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:44.426347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:44.435970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:46.445480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:46.450433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:48.453141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:48.457291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:50.461327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:50.465990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:52.469304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:52.473871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:54.477678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:54.482660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:56.486882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:13:56.491579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
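Two patterns in the captured logs above are worth separating from the real failure: the kube-scheduler "Failed to watch" errors at 04:11:29-30 are the usual startup race (the RBAC-backed informers retry until "Caches are synced" at 04:11:33), and the storage-provisioner warnings only say that it still leader-elects on v1 Endpoints, deprecated since Kubernetes v1.33, which produces the two-second warning loop seen above. A hedged way to confirm both by hand, reusing the context name from this report (the static-pod name kube-scheduler-addons-266389 is an assumption, following minikube's kube-scheduler-<node> convention):

	# Scheduler watch errors should stop after the cache-sync line:
	kubectl --context addons-266389 -n kube-system logs kube-scheduler-addons-266389 | grep -c 'Failed to watch'
	# Endpoints-based leader election triggers the deprecation warnings;
	# Lease objects are the non-deprecated replacement:
	kubectl --context addons-266389 -n kube-system get endpoints,lease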
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-266389 -n addons-266389
helpers_test.go:270: (dbg) Run:  kubectl --context addons-266389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: task-pv-pod ingress-nginx-admission-create-n7d4f ingress-nginx-admission-patch-8m974 registry-creds-764b6fb674-7cfhx
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-266389 describe pod task-pv-pod ingress-nginx-admission-create-n7d4f ingress-nginx-admission-patch-8m974 registry-creds-764b6fb674-7cfhx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-266389 describe pod task-pv-pod ingress-nginx-admission-create-n7d4f ingress-nginx-admission-patch-8m974 registry-creds-764b6fb674-7cfhx: exit status 1 (170.063172ms)

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-266389/192.168.49.2
	Start Time:       Tue, 16 Dec 2025 04:13:56 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          public.ecr.aws/nginx/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xd247 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-xd247:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/task-pv-pod to addons-266389
	  Normal  Pulling    1s    kubelet            Pulling image "public.ecr.aws/nginx/nginx:alpine"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-n7d4f" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8m974" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-7cfhx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-266389 describe pod task-pv-pod ingress-nginx-admission-create-n7d4f ingress-nginx-admission-patch-8m974 registry-creds-764b6fb674-7cfhx: exit status 1
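The exit status 1 above is expected behaviour rather than a second failure: kubectl describe prints what it finds (task-pv-pod, still ContainerCreating) and exits non-zero because the other three pods were already deleted. A minimal sketch for querying only the surviving pod; note that --ignore-not-found belongs to kubectl get, describe has no equivalent flag:

	kubectl --context addons-266389 get pod task-pv-pod --ignore-not-found -o wide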
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable headlamp --alsologtostderr -v=1: exit status 11 (355.930919ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:13:57.632570  450000 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:13:57.633821  450000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:57.633871  450000 out.go:374] Setting ErrFile to fd 2...
	I1216 04:13:57.633895  450000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:57.634176  450000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:13:57.634523  450000 mustload.go:66] Loading cluster: addons-266389
	I1216 04:13:57.634959  450000 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:57.634999  450000 addons.go:622] checking whether the cluster is paused
	I1216 04:13:57.635229  450000 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:57.635262  450000 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:13:57.635901  450000 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:13:57.658523  450000 ssh_runner.go:195] Run: systemctl --version
	I1216 04:13:57.658580  450000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:13:57.689780  450000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:13:57.808698  450000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:13:57.808855  450000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:13:57.859511  450000 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:13:57.859574  450000 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:13:57.859606  450000 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:13:57.859624  450000 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:13:57.859657  450000 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:13:57.859683  450000 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:13:57.859701  450000 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:13:57.859719  450000 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:13:57.859747  450000 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:13:57.859771  450000 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:13:57.859790  450000 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:13:57.859807  450000 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:13:57.859838  450000 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:13:57.859860  450000 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:13:57.859878  450000 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:13:57.859917  450000 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:13:57.859946  450000 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:13:57.859970  450000 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:13:57.860001  450000 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:13:57.860023  450000 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:13:57.860045  450000 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:13:57.860062  450000 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:13:57.860089  450000 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:13:57.860117  450000 cri.go:89] found id: ""
	I1216 04:13:57.860198  450000 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:13:57.885673  450000 out.go:203] 
	W1216 04:13:57.888894  450000 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:13:57.888970  450000 out.go:285] * 
	W1216 04:13:57.894593  450000 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:13:57.897740  450000 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.92s)
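All four addons-disable failures in this report share one root cause, visible in the stderr above: minikube's paused-cluster check shells out to sudo runc list -f json, and on this crio node /run/runc does not exist, so the command exits 1 and the check reports MK_ADDON_DISABLE_PAUSED instead of concluding that nothing is paused. A hedged reproduction against the profile from this report:

	# Reproduce the failing pause-check by hand; exit 1 with
	# "open /run/runc: no such file or directory" is the outcome seen above:
	minikube -p addons-266389 ssh -- sudo runc list -f json
	# The same containers remain visible through the CRI:
	minikube -p addons-266389 ssh -- sudo crictl ps --state Running --quiet | head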

                                                
                                    
TestAddons/parallel/CloudSpanner (6.37s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-z56bg" [d80c9f5f-6db6-4410-8951-9747e5cc2c23] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003220663s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (363.192813ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:13:54.142987  449368 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:13:54.143769  449368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:54.143783  449368 out.go:374] Setting ErrFile to fd 2...
	I1216 04:13:54.143790  449368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:54.144230  449368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:13:54.144658  449368 mustload.go:66] Loading cluster: addons-266389
	I1216 04:13:54.145137  449368 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:54.145152  449368 addons.go:622] checking whether the cluster is paused
	I1216 04:13:54.145266  449368 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:54.145288  449368 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:13:54.149758  449368 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:13:54.171308  449368 ssh_runner.go:195] Run: systemctl --version
	I1216 04:13:54.171378  449368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:13:54.193812  449368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:13:54.295744  449368 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:13:54.295816  449368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:13:54.343217  449368 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:13:54.343236  449368 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:13:54.343294  449368 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:13:54.343302  449368 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:13:54.343306  449368 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:13:54.343311  449368 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:13:54.343314  449368 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:13:54.343317  449368 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:13:54.343320  449368 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:13:54.343325  449368 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:13:54.343328  449368 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:13:54.343331  449368 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:13:54.343334  449368 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:13:54.343337  449368 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:13:54.343340  449368 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:13:54.343344  449368 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:13:54.343347  449368 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:13:54.343351  449368 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:13:54.343354  449368 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:13:54.343362  449368 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:13:54.343368  449368 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:13:54.343370  449368 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:13:54.343373  449368 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:13:54.343376  449368 cri.go:89] found id: ""
	I1216 04:13:54.343424  449368 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:13:54.364041  449368 out.go:203] 
	W1216 04:13:54.367009  449368 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:13:54.367049  449368 out.go:285] * 
	W1216 04:13:54.372905  449368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:13:54.375912  449368 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.37s)
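Same runc pause-check failure as in Headlamp above. While the addons disable path is broken, a hedged manual cleanup is possible; the Deployment name is inferred from the ReplicaSet-style pod name cloud-spanner-emulator-5bdddb765-z56bg seen earlier, so treat it as an assumption:

	kubectl --context addons-266389 -n default delete deployment cloud-spanner-emulator --ignore-not-found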

                                                
                                    
TestAddons/parallel/LocalPath (8.37s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-266389 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-266389 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-266389 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [f53fd2b2-f1ed-4205-8a69-a4c86e63d984] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [f53fd2b2-f1ed-4205-8a69-a4c86e63d984] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [f53fd2b2-f1ed-4205-8a69-a4c86e63d984] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003087706s
addons_test.go:969: (dbg) Run:  kubectl --context addons-266389 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 ssh "cat /opt/local-path-provisioner/pvc-12852da6-9e8a-4765-8a93-15cde56a9879_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-266389 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-266389 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (266.313142ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:13:53.781190  449304 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:13:53.782303  449304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:53.782357  449304 out.go:374] Setting ErrFile to fd 2...
	I1216 04:13:53.782383  449304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:53.782682  449304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:13:53.783064  449304 mustload.go:66] Loading cluster: addons-266389
	I1216 04:13:53.783538  449304 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:53.783583  449304 addons.go:622] checking whether the cluster is paused
	I1216 04:13:53.783736  449304 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:53.783773  449304 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:13:53.784405  449304 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:13:53.803061  449304 ssh_runner.go:195] Run: systemctl --version
	I1216 04:13:53.803133  449304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:13:53.821506  449304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:13:53.919734  449304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:13:53.919825  449304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:13:53.949605  449304 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:13:53.949637  449304 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:13:53.949643  449304 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:13:53.949647  449304 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:13:53.949651  449304 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:13:53.949654  449304 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:13:53.949657  449304 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:13:53.949660  449304 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:13:53.949663  449304 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:13:53.949696  449304 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:13:53.949706  449304 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:13:53.949710  449304 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:13:53.949713  449304 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:13:53.949716  449304 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:13:53.949719  449304 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:13:53.949728  449304 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:13:53.949736  449304 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:13:53.949741  449304 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:13:53.949745  449304 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:13:53.949759  449304 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:13:53.949772  449304 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:13:53.949775  449304 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:13:53.949779  449304 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:13:53.949782  449304 cri.go:89] found id: ""
	I1216 04:13:53.949852  449304 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:13:53.967361  449304 out.go:203] 
	W1216 04:13:53.970480  449304 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:13:53.970507  449304 out.go:285] * 
	W1216 04:13:53.976045  449304 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:13:53.979804  449304 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.37s)
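The local-path data path itself behaved: the PVC bound, the test pod wrote file1, the ssh cat step read it back, and the pod and PVC were deleted; only the trailing addons disable hit the runc pause-check again. A hedged spot-check that the helper pod really removed the volume directory (path taken from the cat step above):

	minikube -p addons-266389 ssh -- ls /opt/local-path-provisioner/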

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.34s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-pj9b6" [e28680ad-287b-43c6-907a-fedf89ebc823] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006894081s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (327.479111ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:13:45.356574  448881 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:13:45.357340  448881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:45.357360  448881 out.go:374] Setting ErrFile to fd 2...
	I1216 04:13:45.357368  448881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:45.357695  448881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:13:45.358085  448881 mustload.go:66] Loading cluster: addons-266389
	I1216 04:13:45.359251  448881 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:45.359337  448881 addons.go:622] checking whether the cluster is paused
	I1216 04:13:45.359531  448881 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:45.359574  448881 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:13:45.360342  448881 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:13:45.384605  448881 ssh_runner.go:195] Run: systemctl --version
	I1216 04:13:45.384667  448881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:13:45.412082  448881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:13:45.520934  448881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:13:45.521013  448881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:13:45.564684  448881 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:13:45.564708  448881 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:13:45.564714  448881 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:13:45.564718  448881 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:13:45.564721  448881 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:13:45.564726  448881 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:13:45.564729  448881 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:13:45.564733  448881 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:13:45.564736  448881 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:13:45.564745  448881 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:13:45.564748  448881 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:13:45.564752  448881 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:13:45.564755  448881 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:13:45.564758  448881 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:13:45.564762  448881 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:13:45.564767  448881 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:13:45.564774  448881 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:13:45.564779  448881 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:13:45.564783  448881 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:13:45.564786  448881 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:13:45.564791  448881 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:13:45.564794  448881 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:13:45.564798  448881 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:13:45.564801  448881 cri.go:89] found id: ""
	I1216 04:13:45.564863  448881 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:13:45.593677  448881 out.go:203] 
	W1216 04:13:45.596864  448881 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:13:45.596894  448881 out.go:285] * 
	* 
	W1216 04:13:45.602494  448881 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:13:45.606185  448881 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.34s)
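Every addons-disable failure in this run ends the same way: crictl enumerates the kube-system containers successfully, but the paused-check's follow-up command `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory". A minimal reproduction sketch, run from the host (profile name taken from the log above; the `ls` probe and its outcome are assumptions, not captured output):

	# Re-run the exact command the paused-check executes inside the node:
	minikube -p addons-266389 ssh -- sudo runc list -f json
	# Confirm whether the default runc state directory exists at all:
	minikube -p addons-266389 ssh -- ls -ld /run/runc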
TestAddons/parallel/Yakd (6.26s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-vt9kv" [2bcf41fe-02d5-43bd-8890-54f1fd278b37] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003394097s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-266389 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-266389 addons disable yakd --alsologtostderr -v=1: exit status 11 (258.363268ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1216 04:13:39.064942  448786 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:13:39.065664  448786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:39.065680  448786 out.go:374] Setting ErrFile to fd 2...
	I1216 04:13:39.065687  448786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:13:39.066010  448786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:13:39.066350  448786 mustload.go:66] Loading cluster: addons-266389
	I1216 04:13:39.066841  448786 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:39.066867  448786 addons.go:622] checking whether the cluster is paused
	I1216 04:13:39.067057  448786 config.go:182] Loaded profile config "addons-266389": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:13:39.067100  448786 host.go:66] Checking if "addons-266389" exists ...
	I1216 04:13:39.067654  448786 cli_runner.go:164] Run: docker container inspect addons-266389 --format={{.State.Status}}
	I1216 04:13:39.086257  448786 ssh_runner.go:195] Run: systemctl --version
	I1216 04:13:39.086309  448786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-266389
	I1216 04:13:39.106850  448786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/addons-266389/id_rsa Username:docker}
	I1216 04:13:39.207754  448786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:13:39.207838  448786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:13:39.236928  448786 cri.go:89] found id: "12223ad1323870f818f0b7cea625afddb976f78080ae4e621b3fd1ff2b995448"
	I1216 04:13:39.236955  448786 cri.go:89] found id: "0b4f3c5e893d7d688ce11f0b735244ba259b54e71bb0db9def0c52ec4a6196f9"
	I1216 04:13:39.236961  448786 cri.go:89] found id: "c9070f308fd86dcb194863adfa25caf33b8078fea65c93e048532ca55252b149"
	I1216 04:13:39.236965  448786 cri.go:89] found id: "48496242e59c5f9fd20a3cf2cf095636b56060127d59b3be58fc376b11def80e"
	I1216 04:13:39.236968  448786 cri.go:89] found id: "a222cf871797573e3eef6577f6ec244cff60083f33108c17d0557e3e86447425"
	I1216 04:13:39.236971  448786 cri.go:89] found id: "52a17616824e66d4515c8cbbb81da1c20d581539ac23c2beef82414ca9a88947"
	I1216 04:13:39.236974  448786 cri.go:89] found id: "3efc9d422c0c3de3f0c64272d87beb7ec57afa5a06560678be6efac67b31933d"
	I1216 04:13:39.236977  448786 cri.go:89] found id: "6e3be5772ff866b353ef435e11207155aef5c771c6646b845dc44cc9b3d9cb09"
	I1216 04:13:39.236980  448786 cri.go:89] found id: "6e142dfc8491613286e72c104c9f425af802063a7d5b24e41e1838595313bb2e"
	I1216 04:13:39.236990  448786 cri.go:89] found id: "4da4c59550ee3f7f546b1db7feef77e6fa562227a4d5271dfd88d4570e8d338c"
	I1216 04:13:39.236994  448786 cri.go:89] found id: "66770881f17c90de3b600f64913cc2c32b0eb05f7cb745296b5164f65f09a274"
	I1216 04:13:39.236997  448786 cri.go:89] found id: "84135c3563dc8ab0260e1d74772acd0c35b8086172a765356fb152e5bf8b5e24"
	I1216 04:13:39.237000  448786 cri.go:89] found id: "698b79e9ff28b050843b01ac1aeb2d6713a37081b3a49970b450f2921b017d65"
	I1216 04:13:39.237003  448786 cri.go:89] found id: "63eba54ed2b9b909caf9b77d9444ec50a92a2378b5bf422082c3b8dc48b39db0"
	I1216 04:13:39.237006  448786 cri.go:89] found id: "8b24d28c9cf9a7beb168371e6f38a9785400279da370f6f8efb4a05f48438d5d"
	I1216 04:13:39.237013  448786 cri.go:89] found id: "b3d0766b0e4db2ffc9e9f10c2b01e4d77db5d64dfbccffc1110857435ec5bfc7"
	I1216 04:13:39.237016  448786 cri.go:89] found id: "198a5f79252ec17b2bf8a68340608fdf9bfecf10a3080c718dd6111e88423d4b"
	I1216 04:13:39.237020  448786 cri.go:89] found id: "71f0cfb9d95160d72af41a12a02bc8f629982dfa4d189cd54b07526a7b3e181e"
	I1216 04:13:39.237024  448786 cri.go:89] found id: "cb4b75c762835bc0ff06ad839888d274ddfa2ff22f5a66da96a878256510f39e"
	I1216 04:13:39.237027  448786 cri.go:89] found id: "9e53dfcedc5aeb84e277c13871ade0c23e5c74ce165d1d0da3876d153d91eda3"
	I1216 04:13:39.237032  448786 cri.go:89] found id: "4f4977c8f895c916508150e5f19d7e88942d5386ab444f08ad93547dc8af6a6d"
	I1216 04:13:39.237035  448786 cri.go:89] found id: "6fd0cf07fb5327a32581b61a3e659c921dddc24106a8e64fcec96dd3b5e2f628"
	I1216 04:13:39.237038  448786 cri.go:89] found id: "d27466cb0ef32bf527b69474e3e4fc84e401d10dc1a84ca2d828ee31735a89df"
	I1216 04:13:39.237041  448786 cri.go:89] found id: ""
	I1216 04:13:39.237119  448786 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 04:13:39.254815  448786 out.go:203] 
	W1216 04:13:39.259399  448786 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:13:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 04:13:39.259430  448786 out.go:285] * 
	* 
	W1216 04:13:39.265008  448786 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:13:39.269143  448786 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-266389 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)
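Same signature as NvidiaDevicePlugin above: the crictl listing succeeds, so the runtime itself is healthy, which suggests crio is driving runc with a non-default state root rather than /run/runc. A hedged probe sketch (the /run/crio paths below are assumptions based on typical crio layouts, not taken from this log):

	# runc's global --root flag selects the state directory; pointing it at crio's
	# runtime state (if that is where it lives) should list the containers crictl saw:
	minikube -p addons-266389 ssh -- sudo ls /run/crio
	minikube -p addons-266389 ssh -- sudo runc --root /run/crio/runc list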
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (503.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1216 04:23:22.214385  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:23:49.922571  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:24.309132  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:24.315580  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:24.326970  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:24.348379  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:24.389733  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:24.471299  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:24.632848  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:24.954564  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:25.596687  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:26.878168  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:29.439670  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:34.561051  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:25:44.802445  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:26:05.283940  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:26:46.245463  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:28:08.169782  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:28:22.220391  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.820628159s)
-- stdout --
	* [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Found network options:
	  - HTTP_PROXY=localhost:34027
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:34027 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-763073 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-763073 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001230058s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001351693s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001351693s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
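The start fails because the kubelet never reports healthy on 127.0.0.1:10248 within kubeadm's 4m window, across both init attempts. The checks below are the ones kubeadm itself suggests in the output above, plus a cgroup probe, since the preflight warnings flag deprecated cgroup v1 support (profile name from the log; the stat interpretation is standard, not captured here):

	# What kubeadm recommends, run inside the node:
	minikube -p functional-763073 ssh -- sudo systemctl status kubelet
	minikube -p functional-763073 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# Which cgroup filesystem the node is on: "cgroup2fs" means v2,
	# "tmpfs" indicates the legacy v1 hierarchy the warnings are about:
	minikube -p functional-763073 ssh -- stat -fc %T /sys/fs/cgroup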
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:
-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 6 (334.957038ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1216 04:29:35.629891  475406 status.go:458] kubeconfig endpoint: get endpoint: "functional-763073" does not appear in /home/jenkins/minikube-integration/22158-438353/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
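The stale-context warning in the status output points at its own fix. A sketch of the suggested follow-up (standard minikube/kubectl commands, as also seen in the audit table below; whether they help here is an assumption, since the apiserver never came up):

	# Repoint the kubeconfig at the current cluster, as the status output suggests:
	minikube -p functional-763073 update-context
	kubectl config current-context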
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-861171 --kill=true                                                                                                                  │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │                     │
	│ addons         │ functional-861171 addons list                                                                                                                     │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:20 UTC │
	│ addons         │ functional-861171 addons list -o json                                                                                                             │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:20 UTC │
	│ service        │ functional-861171 service hello-node-connect --url                                                                                                │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:20 UTC │
	│ start          │ -p functional-861171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │                     │
	│ start          │ -p functional-861171 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                   │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │                     │
	│ start          │ -p functional-861171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-861171 --alsologtostderr -v=1                                                                                    │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service list                                                                                                                    │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service list -o json                                                                                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service --namespace=default --https --url hello-node                                                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service hello-node --url --format={{.IP}}                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service hello-node --url                                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format short --alsologtostderr                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format yaml --alsologtostderr                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ ssh            │ functional-861171 ssh pgrep buildkitd                                                                                                             │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │                     │
	│ image          │ functional-861171 image build -t localhost/my-image:functional-861171 testdata/build --alsologtostderr                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format json --alsologtostderr                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format table --alsologtostderr                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls                                                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ delete         │ -p functional-861171                                                                                                                              │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ start          │ -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:21:13
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:21:13.508776  469820 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:21:13.508889  469820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:21:13.508894  469820 out.go:374] Setting ErrFile to fd 2...
	I1216 04:21:13.508898  469820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:21:13.509321  469820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:21:13.509840  469820 out.go:368] Setting JSON to false
	I1216 04:21:13.510686  469820 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11020,"bootTime":1765847854,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:21:13.510782  469820 start.go:143] virtualization:  
	I1216 04:21:13.515398  469820 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:21:13.520152  469820 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:21:13.520213  469820 notify.go:221] Checking for updates...
	I1216 04:21:13.527170  469820 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:21:13.530486  469820 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:21:13.533714  469820 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:21:13.536875  469820 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:21:13.540035  469820 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:21:13.543355  469820 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:21:13.581178  469820 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:21:13.581374  469820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:21:13.642185  469820 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 04:21:13.633010981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:21:13.642281  469820 docker.go:319] overlay module found
	I1216 04:21:13.645645  469820 out.go:179] * Using the docker driver based on user configuration
	I1216 04:21:13.648664  469820 start.go:309] selected driver: docker
	I1216 04:21:13.648690  469820 start.go:927] validating driver "docker" against <nil>
	I1216 04:21:13.648702  469820 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:21:13.649498  469820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:21:13.704500  469820 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 04:21:13.695489349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:21:13.704647  469820 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:21:13.704916  469820 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:21:13.707985  469820 out.go:179] * Using Docker driver with root privileges
	I1216 04:21:13.710904  469820 cni.go:84] Creating CNI manager for ""
	I1216 04:21:13.710959  469820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:21:13.710967  469820 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:21:13.711051  469820 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:21:13.714268  469820 out.go:179] * Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	I1216 04:21:13.717248  469820 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:21:13.720297  469820 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:21:13.723118  469820 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:21:13.723154  469820 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:21:13.723162  469820 cache.go:65] Caching tarball of preloaded images
	I1216 04:21:13.723206  469820 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:21:13.723245  469820 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:21:13.723255  469820 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:21:13.723605  469820 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:21:13.723624  469820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json: {Name:mk524dd77aa23a6fd2e623c56ffc4b8845967b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:21:13.742991  469820 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:21:13.743003  469820 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:21:13.743024  469820 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:21:13.743054  469820 start.go:360] acquireMachinesLock for functional-763073: {Name:mk37f96bdb0feffde12ec58bbc71256d58abc2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:21:13.743165  469820 start.go:364] duration metric: took 96.822µs to acquireMachinesLock for "functional-763073"
	I1216 04:21:13.743191  469820 start.go:93] Provisioning new machine with config: &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:21:13.743253  469820 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:21:13.748516  469820 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1216 04:21:13.748798  469820 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34027 to docker env.
	I1216 04:21:13.748825  469820 start.go:159] libmachine.API.Create for "functional-763073" (driver="docker")
	I1216 04:21:13.748848  469820 client.go:173] LocalClient.Create starting
	I1216 04:21:13.748909  469820 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem
	I1216 04:21:13.748939  469820 main.go:143] libmachine: Decoding PEM data...
	I1216 04:21:13.748962  469820 main.go:143] libmachine: Parsing certificate...
	I1216 04:21:13.749012  469820 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem
	I1216 04:21:13.749032  469820 main.go:143] libmachine: Decoding PEM data...
	I1216 04:21:13.749042  469820 main.go:143] libmachine: Parsing certificate...
	I1216 04:21:13.749492  469820 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:21:13.765323  469820 cli_runner.go:211] docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:21:13.765393  469820 network_create.go:284] running [docker network inspect functional-763073] to gather additional debugging logs...
	I1216 04:21:13.765408  469820 cli_runner.go:164] Run: docker network inspect functional-763073
	W1216 04:21:13.779574  469820 cli_runner.go:211] docker network inspect functional-763073 returned with exit code 1
	I1216 04:21:13.779596  469820 network_create.go:287] error running [docker network inspect functional-763073]: docker network inspect functional-763073: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-763073 not found
	I1216 04:21:13.779607  469820 network_create.go:289] output of [docker network inspect functional-763073]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-763073 not found
	
	** /stderr **
	I1216 04:21:13.779693  469820 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:21:13.797328  469820 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001952190}
	I1216 04:21:13.797374  469820 network_create.go:124] attempt to create docker network functional-763073 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 04:21:13.797428  469820 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-763073 functional-763073
	I1216 04:21:13.855804  469820 network_create.go:108] docker network functional-763073 192.168.49.0/24 created
	I1216 04:21:13.855824  469820 kic.go:121] calculated static IP "192.168.49.2" for the "functional-763073" container
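The network just created can be verified with the same Go-template style these inspect calls use; for example (illustrative, not a command from this run):
	docker network inspect functional-763073 -f '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
which should print 192.168.49.0/24 gw 192.168.49.1 for the subnet chosen above.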
	I1216 04:21:13.855912  469820 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:21:13.871093  469820 cli_runner.go:164] Run: docker volume create functional-763073 --label name.minikube.sigs.k8s.io=functional-763073 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:21:13.889636  469820 oci.go:103] Successfully created a docker volume functional-763073
	I1216 04:21:13.889717  469820 cli_runner.go:164] Run: docker run --rm --name functional-763073-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-763073 --entrypoint /usr/bin/test -v functional-763073:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 04:21:14.408231  469820 oci.go:107] Successfully prepared a docker volume functional-763073
	I1216 04:21:14.408294  469820 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:21:14.408302  469820 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:21:14.408377  469820 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-763073:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:21:18.495436  469820 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-763073:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (4.0870265s)
	I1216 04:21:18.495460  469820 kic.go:203] duration metric: took 4.087154059s to extract preloaded images to volume ...
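The two docker run invocations above are minikube's preload pattern: a named volume is first probed with a throwaway container, then populated by untarring the lz4-compressed image archive into it. A minimal standalone sketch of the same pattern, with a placeholder volume name, tarball path, and image tag rather than the exact values used in this run:
	docker volume create demo-node
	# One-shot container that extracts the preload archive into the volume.
	docker run --rm \
	  -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v demo-node:/extractDir \
	  --entrypoint /usr/bin/tar \
	  gcr.io/k8s-minikube/kicbase-builds:<tag> \
	  -I lz4 -xf /preloaded.tar -C /extractDir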
	W1216 04:21:18.495623  469820 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 04:21:18.495726  469820 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:21:18.559468  469820 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-763073 --name functional-763073 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-763073 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-763073 --network functional-763073 --ip 192.168.49.2 --volume functional-763073:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
	I1216 04:21:18.882676  469820 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Running}}
	I1216 04:21:18.904897  469820 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:21:18.928228  469820 cli_runner.go:164] Run: docker exec functional-763073 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:21:18.977752  469820 oci.go:144] the created container "functional-763073" has a running status.
	I1216 04:21:18.977773  469820 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa...
	I1216 04:21:19.219476  469820 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:21:19.262703  469820 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:21:19.290840  469820 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:21:19.290852  469820 kic_runner.go:114] Args: [docker exec --privileged functional-763073 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:21:19.353388  469820 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:21:19.382249  469820 machine.go:94] provisionDockerMachine start ...
	I1216 04:21:19.382341  469820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:21:19.405301  469820 main.go:143] libmachine: Using SSH client type: native
	I1216 04:21:19.405628  469820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:21:19.405634  469820 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:21:19.406299  469820 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60006->127.0.0.1:33148: read: connection reset by peer
	I1216 04:21:22.540774  469820 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:21:22.540790  469820 ubuntu.go:182] provisioning hostname "functional-763073"
	I1216 04:21:22.540854  469820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:21:22.558836  469820 main.go:143] libmachine: Using SSH client type: native
	I1216 04:21:22.559150  469820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:21:22.559158  469820 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-763073 && echo "functional-763073" | sudo tee /etc/hostname
	I1216 04:21:22.702866  469820 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:21:22.702942  469820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:21:22.723388  469820 main.go:143] libmachine: Using SSH client type: native
	I1216 04:21:22.723692  469820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:21:22.723705  469820 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-763073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:21:22.857498  469820 main.go:143] libmachine: SSH cmd err, output: <nil>: 
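Every provisioning step above runs over SSH to the node container, using the host port Docker mapped to container port 22 (33148 in this run), discovered via the inspect template repeated throughout the log. The equivalent lookup and connection in plain shell (illustrative; minikube dials this connection from Go rather than shelling out to ssh):
	PORT=$(docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-763073)
	ssh -p "$PORT" \
	  -i /home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa \
	  docker@127.0.0.1 hostname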
	I1216 04:21:22.857530  469820 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:21:22.857551  469820 ubuntu.go:190] setting up certificates
	I1216 04:21:22.857559  469820 provision.go:84] configureAuth start
	I1216 04:21:22.857625  469820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:21:22.874889  469820 provision.go:143] copyHostCerts
	I1216 04:21:22.874949  469820 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 04:21:22.874957  469820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:21:22.875032  469820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:21:22.875177  469820 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 04:21:22.875181  469820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:21:22.875214  469820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:21:22.875268  469820 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 04:21:22.875271  469820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:21:22.875294  469820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:21:22.875338  469820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.functional-763073 san=[127.0.0.1 192.168.49.2 functional-763073 localhost minikube]
	I1216 04:21:23.129393  469820 provision.go:177] copyRemoteCerts
	I1216 04:21:23.129446  469820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:21:23.129491  469820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:21:23.146441  469820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:21:23.244898  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:21:23.262934  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:21:23.280368  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 04:21:23.298314  469820 provision.go:87] duration metric: took 440.733412ms to configureAuth
	I1216 04:21:23.298332  469820 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:21:23.298532  469820 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:21:23.298642  469820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:21:23.315766  469820 main.go:143] libmachine: Using SSH client type: native
	I1216 04:21:23.316073  469820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:21:23.316085  469820 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:21:23.617969  469820 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:21:23.617984  469820 machine.go:97] duration metric: took 4.235720287s to provisionDockerMachine
	I1216 04:21:23.617993  469820 client.go:176] duration metric: took 9.869140536s to LocalClient.Create
	I1216 04:21:23.618005  469820 start.go:167] duration metric: took 9.869182186s to libmachine.API.Create "functional-763073"
	I1216 04:21:23.618011  469820 start.go:293] postStartSetup for "functional-763073" (driver="docker")
	I1216 04:21:23.618022  469820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:21:23.618080  469820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:21:23.618118  469820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:21:23.635481  469820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:21:23.733138  469820 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:21:23.736445  469820 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:21:23.736463  469820 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:21:23.736473  469820 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:21:23.736526  469820 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:21:23.736615  469820 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 04:21:23.736693  469820 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> hosts in /etc/test/nested/copy/441727
	I1216 04:21:23.736745  469820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/441727
	I1216 04:21:23.744380  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:21:23.761904  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts --> /etc/test/nested/copy/441727/hosts (40 bytes)
	I1216 04:21:23.779098  469820 start.go:296] duration metric: took 161.073691ms for postStartSetup
	I1216 04:21:23.779454  469820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:21:23.796753  469820 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:21:23.797026  469820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:21:23.797102  469820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:21:23.814129  469820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:21:23.906547  469820 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:21:23.911762  469820 start.go:128] duration metric: took 10.168494215s to createHost
	I1216 04:21:23.911779  469820 start.go:83] releasing machines lock for "functional-763073", held for 10.168606166s
	I1216 04:21:23.911867  469820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:21:23.933275  469820 out.go:179] * Found network options:
	I1216 04:21:23.936474  469820 out.go:179]   - HTTP_PROXY=localhost:34027
	W1216 04:21:23.939380  469820 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1216 04:21:23.942345  469820 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
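The proxy warning above is typically resolved, per the linked page, by including the minikube IP or its subnet in NO_PROXY before starting; for example (illustrative values):
	export NO_PROXY=localhost,127.0.0.1,192.168.49.0/24
	out/minikube-linux-arm64 start -p functional-763073 --driver=docker --container-runtime=crio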
	I1216 04:21:23.945280  469820 ssh_runner.go:195] Run: cat /version.json
	I1216 04:21:23.945322  469820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:21:23.945334  469820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:21:23.945384  469820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:21:23.972772  469820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:21:23.977324  469820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:21:24.162339  469820 ssh_runner.go:195] Run: systemctl --version
	I1216 04:21:24.168826  469820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:21:24.205734  469820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:21:24.210004  469820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:21:24.210078  469820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:21:24.237851  469820 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1216 04:21:24.237866  469820 start.go:496] detecting cgroup driver to use...
	I1216 04:21:24.237899  469820 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:21:24.237953  469820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:21:24.254652  469820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:21:24.267280  469820 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:21:24.267332  469820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:21:24.285031  469820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:21:24.306401  469820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:21:24.422772  469820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:21:24.542125  469820 docker.go:234] disabling docker service ...
	I1216 04:21:24.542192  469820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:21:24.563268  469820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:21:24.576386  469820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:21:24.708023  469820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:21:24.835007  469820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:21:24.848297  469820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:21:24.863314  469820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:21:24.863371  469820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:21:24.873118  469820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:21:24.873175  469820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:21:24.882953  469820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:21:24.891652  469820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:21:24.900647  469820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:21:24.909199  469820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:21:24.917928  469820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:21:24.931149  469820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:21:24.940007  469820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:21:24.947508  469820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:21:24.954650  469820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:21:25.065521  469820 ssh_runner.go:195] Run: sudo systemctl restart crio
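The sed edits above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place before this restart; their net effect is roughly the following (a sketch only: section placement follows stock CRI-O config conventions, and the drop-in's other keys are omitted):
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"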
	I1216 04:21:25.220618  469820 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:21:25.220678  469820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:21:25.224572  469820 start.go:564] Will wait 60s for crictl version
	I1216 04:21:25.224628  469820 ssh_runner.go:195] Run: which crictl
	I1216 04:21:25.228219  469820 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:21:25.254784  469820 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:21:25.254890  469820 ssh_runner.go:195] Run: crio --version
	I1216 04:21:25.286056  469820 ssh_runner.go:195] Run: crio --version
	I1216 04:21:25.318312  469820 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 04:21:25.320993  469820 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:21:25.337651  469820 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:21:25.341254  469820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:21:25.350954  469820 kubeadm.go:884] updating cluster {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:21:25.351060  469820 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:21:25.351116  469820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:21:25.383688  469820 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:21:25.383700  469820 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:21:25.383757  469820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:21:25.411776  469820 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:21:25.411788  469820 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:21:25.411795  469820 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 04:21:25.411884  469820 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-763073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:21:25.411966  469820 ssh_runner.go:195] Run: crio config
	I1216 04:21:25.476006  469820 cni.go:84] Creating CNI manager for ""
	I1216 04:21:25.476017  469820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:21:25.476039  469820 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:21:25.476061  469820 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-763073 NodeName:functional-763073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:21:25.476186  469820 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-763073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
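
The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later drives cluster bring-up; standalone, a file like this would be consumed with something of the form (illustrative: the exact flags minikube passes to kubeadm init are not shown in this log):
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml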
	
	I1216 04:21:25.476259  469820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:21:25.483979  469820 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:21:25.484041  469820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:21:25.491722  469820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 04:21:25.506139  469820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:21:25.523067  469820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 04:21:25.535813  469820 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:21:25.539359  469820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:21:25.549138  469820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:21:25.657102  469820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:21:25.678846  469820 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073 for IP: 192.168.49.2
	I1216 04:21:25.678857  469820 certs.go:195] generating shared ca certs ...
	I1216 04:21:25.678871  469820 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:21:25.679010  469820 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:21:25.679050  469820 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:21:25.679056  469820 certs.go:257] generating profile certs ...
	I1216 04:21:25.679116  469820 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key
	I1216 04:21:25.679125  469820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt with IP's: []
	I1216 04:21:25.901855  469820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt ...
	I1216 04:21:25.901872  469820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: {Name:mkabd0affdf65e040ffd129f24a95b8e52a757e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:21:25.902081  469820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key ...
	I1216 04:21:25.902088  469820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key: {Name:mk2715e1296258450e2e23b04e61d85e4a3e7395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:21:25.902176  469820 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195
	I1216 04:21:25.902187  469820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt.8a462195 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 04:21:26.108173  469820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt.8a462195 ...
	I1216 04:21:26.108190  469820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt.8a462195: {Name:mkbb91545cc919e3cd26af1a664e0888f663961a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:21:26.108375  469820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195 ...
	I1216 04:21:26.108384  469820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195: {Name:mk1fd83fca490ef0c4f958ef185056f806dd98d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:21:26.108469  469820 certs.go:382] copying /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt.8a462195 -> /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt
	I1216 04:21:26.108541  469820 certs.go:386] copying /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195 -> /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key
	I1216 04:21:26.108590  469820 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key
	I1216 04:21:26.108603  469820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt with IP's: []
	I1216 04:21:26.525962  469820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt ...
	I1216 04:21:26.525979  469820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt: {Name:mk83622e09f85fc40eca647c64dc8e11d442112f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:21:26.526165  469820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key ...
	I1216 04:21:26.526175  469820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key: {Name:mk36a295a8bb2429034990f630577eb811e5257e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:21:26.526369  469820 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 04:21:26.526410  469820 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 04:21:26.526417  469820 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:21:26.526443  469820 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:21:26.526466  469820 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:21:26.526491  469820 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:21:26.526539  469820 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:21:26.527085  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:21:26.545661  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:21:26.563579  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:21:26.590306  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:21:26.611419  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:21:26.632685  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:21:26.651175  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:21:26.668657  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:21:26.686444  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 04:21:26.704527  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:21:26.721838  469820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 04:21:26.739897  469820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:21:26.752511  469820 ssh_runner.go:195] Run: openssl version
	I1216 04:21:26.758956  469820 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:21:26.766313  469820 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:21:26.773462  469820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:21:26.777138  469820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:21:26.777206  469820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:21:26.817884  469820 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:21:26.825278  469820 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 04:21:26.832551  469820 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 04:21:26.839768  469820 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 04:21:26.847210  469820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 04:21:26.851093  469820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:21:26.851151  469820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 04:21:26.891934  469820 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:21:26.899278  469820 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/441727.pem /etc/ssl/certs/51391683.0
	I1216 04:21:26.906611  469820 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 04:21:26.913820  469820 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 04:21:26.921045  469820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 04:21:26.924815  469820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:21:26.924873  469820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 04:21:26.966111  469820 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:21:26.973486  469820 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4417272.pem /etc/ssl/certs/3ec20f2e.0
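[editor's note] The openssl/ln sequence above is the standard OpenSSL trust-store registration dance: compute the certificate's subject hash, then symlink <hash>.0 in /etc/ssl/certs at the hashed name so TLS clients can find the CA. A minimal local sketch of the same flow in Go (paths are illustrative; the real run executes these commands over SSH inside the node):

// subjecthash.go - compute a cert's OpenSSL subject hash and link it into a
// trust directory, mirroring the "openssl x509 -hash" + "ln -fs" steps above.
// Sketch only: error handling and hash-collision suffixes are simplified.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, trustDir string) error {
	// openssl prints the 8-hex-digit subject hash on a line by itself.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	// OpenSSL-style trust dirs expect a symlink named <hash>.0.
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: drop any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The .0 suffix disambiguates subject-hash collisions; a fuller implementation would probe .1, .2, ... the way c_rehash does.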
	I1216 04:21:26.980596  469820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:21:26.984003  469820 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:21:26.984045  469820 kubeadm.go:401] StartCluster: {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
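[editor's note] The StartCluster line dumps the whole cluster config struct. Reduced to a few of the fields visible in the dump, its shape looks roughly like the following; the field names are taken from the log, but the types are inferred and this is an illustrative reconstruction, not minikube's actual definition:

// Illustrative reconstruction of a slice of the config dumped above.
// Field names appear in the log; types are inferred, not authoritative.
package config

type KubernetesConfig struct {
	KubernetesVersion string // e.g. "v1.35.0-beta.0"
	ClusterName       string // e.g. "functional-763073"
	ContainerRuntime  string // e.g. "crio"
	NetworkPlugin     string // e.g. "cni"
	ServiceCIDR       string // e.g. "10.96.0.0/12"
}

type Node struct {
	Name              string
	IP                string // e.g. "192.168.49.2"
	Port              int    // e.g. 8441 (the non-default API server port here)
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name             string // profile name
	Driver           string // e.g. "docker"
	Memory           int    // MiB, e.g. 4096
	CPUs             int
	KubernetesConfig KubernetesConfig
	Nodes            []Node
}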
	I1216 04:21:26.984123  469820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:21:26.984181  469820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:21:27.012835  469820 cri.go:89] found id: ""
	I1216 04:21:27.012901  469820 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:21:27.021046  469820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:21:27.028899  469820 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:21:27.028951  469820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:21:27.036625  469820 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:21:27.036635  469820 kubeadm.go:158] found existing configuration files:
	
	I1216 04:21:27.036711  469820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:21:27.044518  469820 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:21:27.044576  469820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:21:27.052097  469820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:21:27.059999  469820 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:21:27.060072  469820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:21:27.069027  469820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:21:27.076891  469820 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:21:27.076965  469820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:21:27.085020  469820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:21:27.093774  469820 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:21:27.093837  469820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
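[editor's note] The four grep/rm pairs above are a stale-config sweep: for each kubeconfig under /etc/kubernetes, the expected control-plane endpoint is grepped for, and the file is removed when the endpoint is absent (or, as here, when the file does not exist at all and grep exits with status 2). A condensed local sketch of the same loop, with the endpoint hard-coded for illustration (the real flow runs it over SSH):

// Sweep kubeconfig files that do not reference the expected endpoint,
// mirroring the grep-then-rm sequence in the log. Local sketch only.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func sweepStale(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: remove so kubeadm regenerates it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, "sweep:", rmErr)
			}
		}
	}
}

func main() {
	sweepStale("https://control-plane.minikube.internal:8441", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}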
	I1216 04:21:27.101777  469820 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:21:27.141626  469820 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:21:27.141695  469820 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:21:27.212219  469820 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:21:27.212284  469820 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:21:27.212319  469820 kubeadm.go:319] OS: Linux
	I1216 04:21:27.212363  469820 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:21:27.212411  469820 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:21:27.212457  469820 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:21:27.212510  469820 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:21:27.212563  469820 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:21:27.212614  469820 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:21:27.212667  469820 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:21:27.212714  469820 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:21:27.212761  469820 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:21:27.293016  469820 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:21:27.293161  469820 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:21:27.293290  469820 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:21:27.309737  469820 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:21:27.316275  469820 out.go:252]   - Generating certificates and keys ...
	I1216 04:21:27.316370  469820 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:21:27.316435  469820 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:21:27.836398  469820 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:21:28.070056  469820 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:21:28.359063  469820 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:21:28.536697  469820 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:21:28.741800  469820 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:21:28.742025  469820 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-763073 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:21:28.833491  469820 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:21:28.833862  469820 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-763073 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:21:29.174136  469820 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:21:29.949827  469820 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:21:30.133419  469820 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:21:30.133745  469820 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:21:30.361853  469820 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:21:30.591876  469820 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:21:30.990642  469820 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:21:31.565841  469820 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:21:32.049195  469820 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:21:32.049949  469820 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:21:32.055163  469820 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:21:32.058908  469820 out.go:252]   - Booting up control plane ...
	I1216 04:21:32.059021  469820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:21:32.059099  469820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:21:32.059165  469820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:21:32.075529  469820 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:21:32.075844  469820 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:21:32.083980  469820 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:21:32.084294  469820 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:21:32.084498  469820 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:21:32.223011  469820 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:21:32.223125  469820 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:25:32.224622  469820 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001230058s
	I1216 04:25:32.224647  469820 kubeadm.go:319] 
	I1216 04:25:32.224768  469820 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:25:32.224897  469820 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:25:32.225345  469820 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:25:32.225356  469820 kubeadm.go:319] 
	I1216 04:25:32.225593  469820 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:25:32.225668  469820 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:25:32.225734  469820 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:25:32.225738  469820 kubeadm.go:319] 
	I1216 04:25:32.230533  469820 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:25:32.231051  469820 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:25:32.231188  469820 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:25:32.231457  469820 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:25:32.231462  469820 kubeadm.go:319] 
	I1216 04:25:32.231530  469820 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 04:25:32.231667  469820 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-763073 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-763073 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001230058s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
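	[editor's note] The wait-control-plane failure above is kubeadm polling the kubelet's local healthz endpoint (the log spells out the equivalent curl) until its 4-minute deadline expires. A minimal sketch of that style of check, assuming the default kubelet healthz port 10248:

	// Poll the kubelet healthz endpoint until it answers 200 or the deadline
	// passes, approximating kubeadm's wait-control-plane kubelet-check.
	package main

	import (
		"context"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func waitKubeletHealthy(ctx context.Context, url string) error {
		tick := time.NewTicker(2 * time.Second)
		defer tick.Stop()
		for {
			req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			resp, err := http.DefaultClient.Do(req)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("kubelet not healthy before deadline: %w", ctx.Err())
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		if err := waitKubeletHealthy(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

	Note the two distinct failure modes in this report: the first attempt times out with "context deadline exceeded" (kubelet bound but never healthy), the second with "connection refused" (kubelet not listening at all).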
	
	I1216 04:25:32.232780  469820 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 04:25:32.640672  469820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:25:32.653899  469820 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:25:32.653956  469820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:25:32.661883  469820 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:25:32.661892  469820 kubeadm.go:158] found existing configuration files:
	
	I1216 04:25:32.661953  469820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:25:32.669846  469820 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:25:32.669912  469820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:25:32.677401  469820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:25:32.685181  469820 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:25:32.685244  469820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:25:32.692859  469820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:25:32.700685  469820 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:25:32.700747  469820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:25:32.708080  469820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:25:32.715690  469820 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:25:32.715751  469820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:25:32.723405  469820 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:25:32.761701  469820 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:25:32.762045  469820 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:25:32.840302  469820 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:25:32.840363  469820 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:25:32.840395  469820 kubeadm.go:319] OS: Linux
	I1216 04:25:32.840437  469820 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:25:32.840481  469820 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:25:32.840525  469820 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:25:32.840569  469820 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:25:32.840613  469820 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:25:32.840658  469820 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:25:32.840700  469820 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:25:32.840744  469820 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:25:32.840787  469820 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:25:32.913105  469820 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:25:32.913205  469820 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:25:32.913311  469820 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:25:32.920841  469820 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:25:32.926166  469820 out.go:252]   - Generating certificates and keys ...
	I1216 04:25:32.926259  469820 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:25:32.926330  469820 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:25:32.926411  469820 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:25:32.926477  469820 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:25:32.926552  469820 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:25:32.926611  469820 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:25:32.926679  469820 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:25:32.926739  469820 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:25:32.926818  469820 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:25:32.926904  469820 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:25:32.927117  469820 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:25:32.927181  469820 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:25:33.245337  469820 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:25:33.357319  469820 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:25:34.189566  469820 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:25:34.465512  469820 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:25:34.675927  469820 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:25:34.676515  469820 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:25:34.679205  469820 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:25:34.682524  469820 out.go:252]   - Booting up control plane ...
	I1216 04:25:34.682624  469820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:25:34.682704  469820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:25:34.682766  469820 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:25:34.699695  469820 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:25:34.699798  469820 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:25:34.708007  469820 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:25:34.708252  469820 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:25:34.708404  469820 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:25:34.845656  469820 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:25:34.845774  469820 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:29:34.845418  469820 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001351693s
	I1216 04:29:34.850673  469820 kubeadm.go:319] 
	I1216 04:29:34.850750  469820 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:29:34.850787  469820 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:29:34.850899  469820 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:29:34.850903  469820 kubeadm.go:319] 
	I1216 04:29:34.851015  469820 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:29:34.851049  469820 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:29:34.851080  469820 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:29:34.851084  469820 kubeadm.go:319] 
	I1216 04:29:34.859037  469820 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:29:34.859510  469820 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:29:34.859635  469820 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:29:34.859928  469820 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 04:29:34.859933  469820 kubeadm.go:319] 
	I1216 04:29:34.860028  469820 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 04:29:34.860074  469820 kubeadm.go:403] duration metric: took 8m7.876033697s to StartCluster
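	[editor's note] Both failed init attempts end on the cgroups v1 deprecation warning: this 5.15 AWS kernel is still on cgroup v1, which kubelet v1.35 rejects unless the kubelet configuration explicitly sets FailCgroupV1 to false (per the warning text above). A quick stdlib-only way to tell which cgroup mode a node is running, using the fact that cgroup.controllers only exists at the root of a v2 unified mount:

	// Report whether the host uses the unified cgroup v2 hierarchy.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 (legacy or hybrid hierarchy)")
		}
	}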
	I1216 04:29:34.860125  469820 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:29:34.860193  469820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:29:34.888240  469820 cri.go:89] found id: ""
	I1216 04:29:34.888263  469820 logs.go:282] 0 containers: []
	W1216 04:29:34.888270  469820 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:29:34.888275  469820 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:29:34.888331  469820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:29:34.913744  469820 cri.go:89] found id: ""
	I1216 04:29:34.913768  469820 logs.go:282] 0 containers: []
	W1216 04:29:34.913776  469820 logs.go:284] No container was found matching "etcd"
	I1216 04:29:34.913788  469820 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:29:34.913853  469820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:29:34.939530  469820 cri.go:89] found id: ""
	I1216 04:29:34.939544  469820 logs.go:282] 0 containers: []
	W1216 04:29:34.939551  469820 logs.go:284] No container was found matching "coredns"
	I1216 04:29:34.939556  469820 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:29:34.939618  469820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:29:34.967519  469820 cri.go:89] found id: ""
	I1216 04:29:34.967532  469820 logs.go:282] 0 containers: []
	W1216 04:29:34.967539  469820 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:29:34.967545  469820 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:29:34.967602  469820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:29:34.992965  469820 cri.go:89] found id: ""
	I1216 04:29:34.992978  469820 logs.go:282] 0 containers: []
	W1216 04:29:34.992985  469820 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:29:34.992994  469820 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:29:34.993050  469820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:29:35.022535  469820 cri.go:89] found id: ""
	I1216 04:29:35.022550  469820 logs.go:282] 0 containers: []
	W1216 04:29:35.022557  469820 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:29:35.022563  469820 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:29:35.022624  469820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:29:35.048051  469820 cri.go:89] found id: ""
	I1216 04:29:35.048065  469820 logs.go:282] 0 containers: []
	W1216 04:29:35.048072  469820 logs.go:284] No container was found matching "kindnet"
	I1216 04:29:35.048080  469820 logs.go:123] Gathering logs for kubelet ...
	I1216 04:29:35.048090  469820 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:29:35.115818  469820 logs.go:123] Gathering logs for dmesg ...
	I1216 04:29:35.115839  469820 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:29:35.131078  469820 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:29:35.131095  469820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:29:35.198094  469820 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:29:35.188779    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:35.189587    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:35.191128    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:35.191693    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:35.193377    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:29:35.188779    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:35.189587    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:35.191128    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:35.191693    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:35.193377    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:29:35.198106  469820 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:29:35.198116  469820 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:29:35.229274  469820 logs.go:123] Gathering logs for container status ...
	I1216 04:29:35.229293  469820 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
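	[editor's note] After the second failure, the run fans out over a fixed set of diagnostic collectors (kubelet journal, dmesg, describe nodes, CRI-O journal, container status), tolerating individual failures so one dead component cannot hide the rest; describe nodes duly fails here because the API server never came up. A compact sketch of that gather step, with the commands copied from the log and output simply concatenated:

	// Run a fixed list of diagnostic commands and concatenate their output,
	// continuing past failures, in the spirit of the log-gathering above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		steps := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo crictl ps -a || sudo docker ps -a"},
		}
		for _, s := range steps {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Printf("==> %s <==\n%s", s.name, out)
			if err != nil {
				// Keep going: a failed collector should not mask the others.
				fmt.Printf("(collector error: %v)\n", err)
			}
		}
	}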
	W1216 04:29:35.258549  469820 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001351693s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:29:35.258588  469820 out.go:285] * 
	W1216 04:29:35.258648  469820 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001351693s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:29:35.258665  469820 out.go:285] * 
	W1216 04:29:35.260787  469820 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:29:35.266860  469820 out.go:203] 
	W1216 04:29:35.270653  469820 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001351693s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:29:35.270713  469820 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:29:35.270737  469820 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:29:35.273965  469820 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 04:21:25 functional-763073 crio[841]: time="2025-12-16T04:21:25.214188708Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 04:21:25 functional-763073 crio[841]: time="2025-12-16T04:21:25.214225516Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 04:21:25 functional-763073 crio[841]: time="2025-12-16T04:21:25.214282567Z" level=info msg="Create NRI interface"
	Dec 16 04:21:25 functional-763073 crio[841]: time="2025-12-16T04:21:25.21438464Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 04:21:25 functional-763073 crio[841]: time="2025-12-16T04:21:25.214394224Z" level=info msg="runtime interface created"
	Dec 16 04:21:25 functional-763073 crio[841]: time="2025-12-16T04:21:25.214407918Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 04:21:25 functional-763073 crio[841]: time="2025-12-16T04:21:25.214414015Z" level=info msg="runtime interface starting up..."
	Dec 16 04:21:25 functional-763073 crio[841]: time="2025-12-16T04:21:25.214419471Z" level=info msg="starting plugins..."
	Dec 16 04:21:25 functional-763073 crio[841]: time="2025-12-16T04:21:25.21443342Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:21:25 functional-763073 crio[841]: time="2025-12-16T04:21:25.214498742Z" level=info msg="No systemd watchdog enabled"
	Dec 16 04:21:25 functional-763073 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 04:21:27 functional-763073 crio[841]: time="2025-12-16T04:21:27.298323501Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=58e9d499-b3c7-418f-b112-19bc4858d1d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:21:27 functional-763073 crio[841]: time="2025-12-16T04:21:27.301698427Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=70bc811b-34d5-4f66-8cd4-530b117d3855 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:21:27 functional-763073 crio[841]: time="2025-12-16T04:21:27.302253428Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=07edb167-e9d5-4caa-a51c-57da20d16dd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:21:27 functional-763073 crio[841]: time="2025-12-16T04:21:27.302764867Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4ed4a103-b57d-4b01-8150-fde7262c7365 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:21:27 functional-763073 crio[841]: time="2025-12-16T04:21:27.303293881Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=89744c6a-8cdf-45a4-9856-b27a2c2a3c2d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:21:27 functional-763073 crio[841]: time="2025-12-16T04:21:27.303869821Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f180c0c1-57be-41b2-93a0-50c0c0a5c434 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:21:27 functional-763073 crio[841]: time="2025-12-16T04:21:27.304435989Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=46021461-a81e-42e4-afc1-7f70018a0480 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:25:32 functional-763073 crio[841]: time="2025-12-16T04:25:32.916680739Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=94003aba-8e81-4b13-a8cc-51b0a05e9eb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:25:32 functional-763073 crio[841]: time="2025-12-16T04:25:32.917604901Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0eb0cc0a-904b-422b-8386-4aa6ee00c59e name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:25:32 functional-763073 crio[841]: time="2025-12-16T04:25:32.918077012Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=f8271aee-394d-46e1-b8b3-8b90a498858c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:25:32 functional-763073 crio[841]: time="2025-12-16T04:25:32.91852002Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a9d92283-c2be-4d37-b7b1-30847b78a789 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:25:32 functional-763073 crio[841]: time="2025-12-16T04:25:32.918994519Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0f068c2f-6109-4d51-b24b-88590bd93f1c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:25:32 functional-763073 crio[841]: time="2025-12-16T04:25:32.919418515Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=48a22f22-0f6d-4c29-8590-751ee8bdfb81 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:25:32 functional-763073 crio[841]: time="2025-12-16T04:25:32.919905519Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=f3871836-4f7b-4505-ab78-389d837d7472 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:29:36.259581    4992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:36.260395    4992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:36.261878    4992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:36.262338    4992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:29:36.263564    4992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:29:36 up  3:12,  0 user,  load average: 0.09, 0.42, 1.05
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:29:34 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:29:34 functional-763073 kubelet[4802]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:29:34 functional-763073 kubelet[4802]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:29:34 functional-763073 kubelet[4802]: E1216 04:29:34.117992    4802 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:29:34 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:29:34 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:29:34 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 16 04:29:34 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:29:34 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:29:34 functional-763073 kubelet[4807]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:29:34 functional-763073 kubelet[4807]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:29:34 functional-763073 kubelet[4807]: E1216 04:29:34.873602    4807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:29:34 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:29:34 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:29:35 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 16 04:29:35 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:29:35 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:29:35 functional-763073 kubelet[4907]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:29:35 functional-763073 kubelet[4907]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:29:35 functional-763073 kubelet[4907]: E1216 04:29:35.606856    4907 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:29:35 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:29:35 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:29:36 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 16 04:29:36 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:29:36 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
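Root cause visible in the kubelet journal above: kubelet v1.35 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and the kubeadm SystemVerification warning names the escape hatch, the KubeletConfiguration option FailCgroupV1 set to false. A minimal remediation sketch combining minikube's own suggestion with the configuration fragment the warning asks for; the patch file name and the mechanism for feeding it to minikube are assumptions, not shown in this log:

	# Suggestion printed by minikube in the output above:
	out/minikube-linux-arm64 start -p functional-763073 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# KubeletConfiguration fragment per the kubeadm warning
	# (hypothetical file name; how it is wired into minikube is an assumption):
	cat <<'EOF' > kubelet-cgroupv1-patch.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Alternatively, running the job on a cgroup v2 host (for example booting the runner with systemd.unified_cgroup_hierarchy=1 on the kernel command line) avoids this validation entirely.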
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 6 (338.266922ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 04:29:36.735113  475624 status.go:458] kubeconfig endpoint: get endpoint: "functional-763073" does not appear in /home/jenkins/minikube-integration/22158-438353/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (503.29s)
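The status output above also shows kubectl pointing at a stale context, with the "functional-763073" profile missing from the kubeconfig file. The fix the tool itself suggests can be run directly (a minimal sketch; the profile name is taken from this run):

	# Repoint the kubeconfig entry at the current cluster, then verify:
	out/minikube-linux-arm64 update-context -p functional-763073
	kubectl config current-context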
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1216 04:29:36.752597  441727 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-763073 --alsologtostderr -v=8
E1216 04:30:24.308793  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:30:52.011629  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:22.214112  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:34:45.284820  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:35:24.308573  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-763073 --alsologtostderr -v=8: exit status 80 (6m6.09156732s)

-- stdout --
	* [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 

-- /stdout --
** stderr ** 
	I1216 04:29:36.794313  475694 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:29:36.794434  475694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:29:36.794446  475694 out.go:374] Setting ErrFile to fd 2...
	I1216 04:29:36.794452  475694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:29:36.794700  475694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:29:36.795091  475694 out.go:368] Setting JSON to false
	I1216 04:29:36.795948  475694 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11523,"bootTime":1765847854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:29:36.796022  475694 start.go:143] virtualization:  
	I1216 04:29:36.799564  475694 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:29:36.803377  475694 notify.go:221] Checking for updates...
	I1216 04:29:36.806471  475694 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:29:36.809418  475694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:29:36.812382  475694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:36.815368  475694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:29:36.818384  475694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:29:36.821299  475694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:29:36.824780  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:36.824898  475694 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:29:36.853440  475694 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:29:36.853553  475694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:29:36.911081  475694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:29:36.901976085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:29:36.911198  475694 docker.go:319] overlay module found
	I1216 04:29:36.914378  475694 out.go:179] * Using the docker driver based on existing profile
	I1216 04:29:36.917157  475694 start.go:309] selected driver: docker
	I1216 04:29:36.917180  475694 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:36.917338  475694 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:29:36.917450  475694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:29:36.970986  475694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:29:36.961820507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:29:36.971442  475694 cni.go:84] Creating CNI manager for ""
	I1216 04:29:36.971503  475694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:29:36.971553  475694 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:36.974751  475694 out.go:179] * Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	I1216 04:29:36.977516  475694 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:29:36.980431  475694 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:29:36.983493  475694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:29:36.983530  475694 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:29:36.983585  475694 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:29:36.983595  475694 cache.go:65] Caching tarball of preloaded images
	I1216 04:29:36.983676  475694 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:29:36.983683  475694 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:29:36.983782  475694 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:29:37.009018  475694 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:29:37.009047  475694 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:29:37.009096  475694 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:29:37.009136  475694 start.go:360] acquireMachinesLock for functional-763073: {Name:mk37f96bdb0feffde12ec58bbc71256d58abc2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:29:37.009247  475694 start.go:364] duration metric: took 82.708µs to acquireMachinesLock for "functional-763073"
	I1216 04:29:37.009287  475694 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:29:37.009293  475694 fix.go:54] fixHost starting: 
	I1216 04:29:37.009582  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:37.028726  475694 fix.go:112] recreateIfNeeded on functional-763073: state=Running err=<nil>
	W1216 04:29:37.028764  475694 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:29:37.032201  475694 out.go:252] * Updating the running docker "functional-763073" container ...
	I1216 04:29:37.032251  475694 machine.go:94] provisionDockerMachine start ...
	I1216 04:29:37.032362  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.050328  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.050673  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.050689  475694 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:29:37.192783  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:29:37.192826  475694 ubuntu.go:182] provisioning hostname "functional-763073"
	I1216 04:29:37.192931  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.211313  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.211628  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.211639  475694 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-763073 && echo "functional-763073" | sudo tee /etc/hostname
	I1216 04:29:37.354192  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:29:37.354269  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.376898  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.377254  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.377278  475694 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-763073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:29:37.509279  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:29:37.509306  475694 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:29:37.509326  475694 ubuntu.go:190] setting up certificates
	I1216 04:29:37.509346  475694 provision.go:84] configureAuth start
	I1216 04:29:37.509406  475694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:29:37.527206  475694 provision.go:143] copyHostCerts
	I1216 04:29:37.527264  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:29:37.527308  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 04:29:37.527320  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:29:37.527395  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:29:37.527487  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:29:37.527509  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 04:29:37.527517  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:29:37.527545  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:29:37.527594  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:29:37.527615  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 04:29:37.527622  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:29:37.527648  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:29:37.527699  475694 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.functional-763073 san=[127.0.0.1 192.168.49.2 functional-763073 localhost minikube]
	I1216 04:29:37.800879  475694 provision.go:177] copyRemoteCerts
	I1216 04:29:37.800949  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:29:37.800990  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.823288  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:37.920869  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 04:29:37.920929  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:29:37.938521  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 04:29:37.938583  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:29:37.956377  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 04:29:37.956439  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:29:37.974119  475694 provision.go:87] duration metric: took 464.750518ms to configureAuth
	I1216 04:29:37.974148  475694 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:29:37.974331  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:37.974450  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.991914  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.992233  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.992254  475694 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:29:38.308392  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:29:38.308467  475694 machine.go:97] duration metric: took 1.27620546s to provisionDockerMachine
	I1216 04:29:38.308501  475694 start.go:293] postStartSetup for "functional-763073" (driver="docker")
	I1216 04:29:38.308543  475694 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:29:38.308636  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:29:38.308736  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.327973  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.425975  475694 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:29:38.429465  475694 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 04:29:38.429486  475694 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 04:29:38.429491  475694 command_runner.go:130] > VERSION_ID="12"
	I1216 04:29:38.429495  475694 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 04:29:38.429500  475694 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 04:29:38.429503  475694 command_runner.go:130] > ID=debian
	I1216 04:29:38.429508  475694 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 04:29:38.429575  475694 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 04:29:38.429584  475694 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 04:29:38.429642  475694 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:29:38.429664  475694 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:29:38.429675  475694 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:29:38.429740  475694 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:29:38.429824  475694 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 04:29:38.429840  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> /etc/ssl/certs/4417272.pem
	I1216 04:29:38.429918  475694 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> hosts in /etc/test/nested/copy/441727
	I1216 04:29:38.429926  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> /etc/test/nested/copy/441727/hosts
	I1216 04:29:38.429973  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/441727
	I1216 04:29:38.438164  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:29:38.456472  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts --> /etc/test/nested/copy/441727/hosts (40 bytes)
	I1216 04:29:38.474815  475694 start.go:296] duration metric: took 166.27897ms for postStartSetup
	I1216 04:29:38.474942  475694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:29:38.475008  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.493257  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.586186  475694 command_runner.go:130] > 13%
	I1216 04:29:38.586744  475694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:29:38.591214  475694 command_runner.go:130] > 169G
	I1216 04:29:38.591631  475694 fix.go:56] duration metric: took 1.582334669s for fixHost
	I1216 04:29:38.591655  475694 start.go:83] releasing machines lock for "functional-763073", held for 1.582392532s
	I1216 04:29:38.591756  475694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:29:38.610497  475694 ssh_runner.go:195] Run: cat /version.json
	I1216 04:29:38.610580  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.610804  475694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:29:38.610862  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.644780  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.648235  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.740654  475694 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765575274-22117", "minikube_version": "v1.37.0", "commit": "908107e58d7f489afb59ecef3679cbdc57b624cc"}
	I1216 04:29:38.740792  475694 ssh_runner.go:195] Run: systemctl --version
	I1216 04:29:38.835621  475694 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 04:29:38.838633  475694 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 04:29:38.838716  475694 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 04:29:38.838811  475694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:29:38.876422  475694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 04:29:38.880827  475694 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 04:29:38.881001  475694 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:29:38.881102  475694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:29:38.888966  475694 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:29:38.888992  475694 start.go:496] detecting cgroup driver to use...
	I1216 04:29:38.889023  475694 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:29:38.889116  475694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:29:38.904919  475694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:29:38.918230  475694 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:29:38.918296  475694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:29:38.934386  475694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:29:38.947903  475694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:29:39.064725  475694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:29:39.186461  475694 docker.go:234] disabling docker service ...
	I1216 04:29:39.186555  475694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:29:39.201259  475694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:29:39.214213  475694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:29:39.331697  475694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:29:39.468929  475694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:29:39.481743  475694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:29:39.494008  475694 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1216 04:29:39.494807  475694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:29:39.494889  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.503668  475694 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:29:39.503751  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.513027  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.521738  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.530476  475694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:29:39.538796  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.547730  475694 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.556341  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.565046  475694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:29:39.571643  475694 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 04:29:39.572565  475694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:29:39.579896  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:39.695396  475694 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 04:29:39.852818  475694 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:29:39.852930  475694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:29:39.856967  475694 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1216 04:29:39.856989  475694 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 04:29:39.856996  475694 command_runner.go:130] > Device: 0,72	Inode: 1641        Links: 1
	I1216 04:29:39.857013  475694 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:29:39.857019  475694 command_runner.go:130] > Access: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857028  475694 command_runner.go:130] > Modify: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857036  475694 command_runner.go:130] > Change: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857040  475694 command_runner.go:130] >  Birth: -
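
The 60-second socket wait above boils down to polling stat until the socket exists; a rough shell equivalent with the same path and timeout:

    timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'
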
	I1216 04:29:39.857332  475694 start.go:564] Will wait 60s for crictl version
	I1216 04:29:39.857393  475694 ssh_runner.go:195] Run: which crictl
	I1216 04:29:39.860635  475694 command_runner.go:130] > /usr/local/bin/crictl
	I1216 04:29:39.860907  475694 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:29:39.883882  475694 command_runner.go:130] > Version:  0.1.0
	I1216 04:29:39.883905  475694 command_runner.go:130] > RuntimeName:  cri-o
	I1216 04:29:39.883910  475694 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1216 04:29:39.883916  475694 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 04:29:39.886266  475694 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:29:39.886355  475694 ssh_runner.go:195] Run: crio --version
	I1216 04:29:39.912976  475694 command_runner.go:130] > crio version 1.34.3
	I1216 04:29:39.913004  475694 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 04:29:39.913011  475694 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 04:29:39.913016  475694 command_runner.go:130] >    GitTreeState:   dirty
	I1216 04:29:39.913021  475694 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 04:29:39.913026  475694 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 04:29:39.913030  475694 command_runner.go:130] >    Compiler:       gc
	I1216 04:29:39.913034  475694 command_runner.go:130] >    Platform:       linux/arm64
	I1216 04:29:39.913044  475694 command_runner.go:130] >    Linkmode:       static
	I1216 04:29:39.913048  475694 command_runner.go:130] >    BuildTags:
	I1216 04:29:39.913052  475694 command_runner.go:130] >      static
	I1216 04:29:39.913055  475694 command_runner.go:130] >      netgo
	I1216 04:29:39.913059  475694 command_runner.go:130] >      osusergo
	I1216 04:29:39.913089  475694 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 04:29:39.913094  475694 command_runner.go:130] >      seccomp
	I1216 04:29:39.913097  475694 command_runner.go:130] >      apparmor
	I1216 04:29:39.913101  475694 command_runner.go:130] >      selinux
	I1216 04:29:39.913104  475694 command_runner.go:130] >    LDFlags:          unknown
	I1216 04:29:39.913108  475694 command_runner.go:130] >    SeccompEnabled:   true
	I1216 04:29:39.913112  475694 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 04:29:39.915574  475694 ssh_runner.go:195] Run: crio --version
	I1216 04:29:39.945490  475694 command_runner.go:130] > crio version 1.34.3
	I1216 04:29:39.945513  475694 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 04:29:39.945520  475694 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 04:29:39.945525  475694 command_runner.go:130] >    GitTreeState:   dirty
	I1216 04:29:39.945530  475694 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 04:29:39.945534  475694 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 04:29:39.945538  475694 command_runner.go:130] >    Compiler:       gc
	I1216 04:29:39.945543  475694 command_runner.go:130] >    Platform:       linux/arm64
	I1216 04:29:39.945548  475694 command_runner.go:130] >    Linkmode:       static
	I1216 04:29:39.945551  475694 command_runner.go:130] >    BuildTags:
	I1216 04:29:39.945557  475694 command_runner.go:130] >      static
	I1216 04:29:39.945561  475694 command_runner.go:130] >      netgo
	I1216 04:29:39.945587  475694 command_runner.go:130] >      osusergo
	I1216 04:29:39.945594  475694 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 04:29:39.945598  475694 command_runner.go:130] >      seccomp
	I1216 04:29:39.945601  475694 command_runner.go:130] >      apparmor
	I1216 04:29:39.945607  475694 command_runner.go:130] >      selinux
	I1216 04:29:39.945617  475694 command_runner.go:130] >    LDFlags:          unknown
	I1216 04:29:39.945623  475694 command_runner.go:130] >    SeccompEnabled:   true
	I1216 04:29:39.945639  475694 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 04:29:39.952832  475694 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 04:29:39.955738  475694 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:29:39.972578  475694 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:29:39.976813  475694 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 04:29:39.976940  475694 kubeadm.go:884] updating cluster {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:29:39.977085  475694 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:29:39.977157  475694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:29:40.026676  475694 command_runner.go:130] > {
	I1216 04:29:40.026700  475694 command_runner.go:130] >   "images":  [
	I1216 04:29:40.026707  475694 command_runner.go:130] >     {
	I1216 04:29:40.026715  475694 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 04:29:40.026721  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026727  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 04:29:40.026731  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026736  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026745  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 04:29:40.026758  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 04:29:40.026762  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026770  475694 command_runner.go:130] >       "size":  "111333938",
	I1216 04:29:40.026775  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.026789  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026796  475694 command_runner.go:130] >     },
	I1216 04:29:40.026800  475694 command_runner.go:130] >     {
	I1216 04:29:40.026807  475694 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 04:29:40.026815  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026820  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 04:29:40.026827  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026831  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026843  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 04:29:40.026852  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 04:29:40.026859  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026863  475694 command_runner.go:130] >       "size":  "29037500",
	I1216 04:29:40.026867  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.026879  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026883  475694 command_runner.go:130] >     },
	I1216 04:29:40.026895  475694 command_runner.go:130] >     {
	I1216 04:29:40.026906  475694 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 04:29:40.026917  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026927  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 04:29:40.026930  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026934  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026942  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 04:29:40.026954  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 04:29:40.026962  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026966  475694 command_runner.go:130] >       "size":  "74491780",
	I1216 04:29:40.026974  475694 command_runner.go:130] >       "username":  "nonroot",
	I1216 04:29:40.026979  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026985  475694 command_runner.go:130] >     },
	I1216 04:29:40.026988  475694 command_runner.go:130] >     {
	I1216 04:29:40.026995  475694 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 04:29:40.027002  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027012  475694 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 04:29:40.027019  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027023  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027031  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 04:29:40.027041  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 04:29:40.027047  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027052  475694 command_runner.go:130] >       "size":  "60857170",
	I1216 04:29:40.027058  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027063  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027070  475694 command_runner.go:130] >       },
	I1216 04:29:40.027084  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027092  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027096  475694 command_runner.go:130] >     },
	I1216 04:29:40.027100  475694 command_runner.go:130] >     {
	I1216 04:29:40.027106  475694 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 04:29:40.027114  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027119  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 04:29:40.027129  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027138  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027146  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 04:29:40.027157  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 04:29:40.027161  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027168  475694 command_runner.go:130] >       "size":  "84949999",
	I1216 04:29:40.027171  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027175  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027183  475694 command_runner.go:130] >       },
	I1216 04:29:40.027187  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027192  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027200  475694 command_runner.go:130] >     },
	I1216 04:29:40.027203  475694 command_runner.go:130] >     {
	I1216 04:29:40.027214  475694 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 04:29:40.027229  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027235  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 04:29:40.027241  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027245  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027254  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 04:29:40.027266  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 04:29:40.027269  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027278  475694 command_runner.go:130] >       "size":  "72170325",
	I1216 04:29:40.027281  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027288  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027292  475694 command_runner.go:130] >       },
	I1216 04:29:40.027300  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027305  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027311  475694 command_runner.go:130] >     },
	I1216 04:29:40.027314  475694 command_runner.go:130] >     {
	I1216 04:29:40.027320  475694 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 04:29:40.027324  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027333  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 04:29:40.027337  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027345  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027357  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 04:29:40.027366  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 04:29:40.027372  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027376  475694 command_runner.go:130] >       "size":  "74106775",
	I1216 04:29:40.027384  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027389  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027395  475694 command_runner.go:130] >     },
	I1216 04:29:40.027399  475694 command_runner.go:130] >     {
	I1216 04:29:40.027405  475694 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 04:29:40.027409  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027423  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 04:29:40.027430  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027434  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027442  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 04:29:40.027466  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 04:29:40.027473  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027478  475694 command_runner.go:130] >       "size":  "49822549",
	I1216 04:29:40.027485  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027489  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027492  475694 command_runner.go:130] >       },
	I1216 04:29:40.027498  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027507  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027514  475694 command_runner.go:130] >     },
	I1216 04:29:40.027517  475694 command_runner.go:130] >     {
	I1216 04:29:40.027524  475694 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 04:29:40.027531  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027536  475694 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.027542  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027547  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027557  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 04:29:40.027568  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 04:29:40.027573  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027586  475694 command_runner.go:130] >       "size":  "519884",
	I1216 04:29:40.027593  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027598  475694 command_runner.go:130] >         "value":  "65535"
	I1216 04:29:40.027601  475694 command_runner.go:130] >       },
	I1216 04:29:40.027610  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027614  475694 command_runner.go:130] >       "pinned":  true
	I1216 04:29:40.027620  475694 command_runner.go:130] >     }
	I1216 04:29:40.027623  475694 command_runner.go:130] >   ]
	I1216 04:29:40.027626  475694 command_runner.go:130] > }
	I1216 04:29:40.029894  475694 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:29:40.029927  475694 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:29:40.029987  475694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:29:40.055653  475694 command_runner.go:130] > {
	I1216 04:29:40.055673  475694 command_runner.go:130] >   "images":  [
	I1216 04:29:40.055678  475694 command_runner.go:130] >     {
	I1216 04:29:40.055687  475694 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 04:29:40.055692  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055697  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 04:29:40.055701  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055705  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055715  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 04:29:40.055724  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 04:29:40.055728  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055732  475694 command_runner.go:130] >       "size":  "111333938",
	I1216 04:29:40.055736  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055740  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055744  475694 command_runner.go:130] >     },
	I1216 04:29:40.055747  475694 command_runner.go:130] >     {
	I1216 04:29:40.055753  475694 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 04:29:40.055757  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055762  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 04:29:40.055765  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055769  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055787  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 04:29:40.055795  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 04:29:40.055798  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055802  475694 command_runner.go:130] >       "size":  "29037500",
	I1216 04:29:40.055806  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055817  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055820  475694 command_runner.go:130] >     },
	I1216 04:29:40.055824  475694 command_runner.go:130] >     {
	I1216 04:29:40.055830  475694 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 04:29:40.055833  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055838  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 04:29:40.055841  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055845  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055854  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 04:29:40.055862  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 04:29:40.055865  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055869  475694 command_runner.go:130] >       "size":  "74491780",
	I1216 04:29:40.055873  475694 command_runner.go:130] >       "username":  "nonroot",
	I1216 04:29:40.055876  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055879  475694 command_runner.go:130] >     },
	I1216 04:29:40.055882  475694 command_runner.go:130] >     {
	I1216 04:29:40.055891  475694 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 04:29:40.055894  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055899  475694 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 04:29:40.055904  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055908  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055915  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 04:29:40.055923  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 04:29:40.055926  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055929  475694 command_runner.go:130] >       "size":  "60857170",
	I1216 04:29:40.055933  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.055937  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.055940  475694 command_runner.go:130] >       },
	I1216 04:29:40.055952  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055956  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055959  475694 command_runner.go:130] >     },
	I1216 04:29:40.055961  475694 command_runner.go:130] >     {
	I1216 04:29:40.055968  475694 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 04:29:40.055971  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055976  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 04:29:40.055979  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055983  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055990  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 04:29:40.055998  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 04:29:40.056001  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056005  475694 command_runner.go:130] >       "size":  "84949999",
	I1216 04:29:40.056008  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056012  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056015  475694 command_runner.go:130] >       },
	I1216 04:29:40.056018  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056022  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056024  475694 command_runner.go:130] >     },
	I1216 04:29:40.056027  475694 command_runner.go:130] >     {
	I1216 04:29:40.056033  475694 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 04:29:40.056037  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056043  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 04:29:40.056045  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056049  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056057  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 04:29:40.056065  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 04:29:40.056068  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056072  475694 command_runner.go:130] >       "size":  "72170325",
	I1216 04:29:40.056075  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056079  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056082  475694 command_runner.go:130] >       },
	I1216 04:29:40.056085  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056092  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056096  475694 command_runner.go:130] >     },
	I1216 04:29:40.056099  475694 command_runner.go:130] >     {
	I1216 04:29:40.056106  475694 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 04:29:40.056110  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056115  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 04:29:40.056118  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056122  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056130  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 04:29:40.056137  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 04:29:40.056141  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056144  475694 command_runner.go:130] >       "size":  "74106775",
	I1216 04:29:40.056148  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056152  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056155  475694 command_runner.go:130] >     },
	I1216 04:29:40.056158  475694 command_runner.go:130] >     {
	I1216 04:29:40.056164  475694 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 04:29:40.056168  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056173  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 04:29:40.056176  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056180  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056188  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 04:29:40.056204  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 04:29:40.056207  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056211  475694 command_runner.go:130] >       "size":  "49822549",
	I1216 04:29:40.056215  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056218  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056221  475694 command_runner.go:130] >       },
	I1216 04:29:40.056225  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056228  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056231  475694 command_runner.go:130] >     },
	I1216 04:29:40.056233  475694 command_runner.go:130] >     {
	I1216 04:29:40.056240  475694 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 04:29:40.056247  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056251  475694 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.056255  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056259  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056266  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 04:29:40.056278  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 04:29:40.056281  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056285  475694 command_runner.go:130] >       "size":  "519884",
	I1216 04:29:40.056289  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056293  475694 command_runner.go:130] >         "value":  "65535"
	I1216 04:29:40.056296  475694 command_runner.go:130] >       },
	I1216 04:29:40.056299  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056303  475694 command_runner.go:130] >       "pinned":  true
	I1216 04:29:40.056305  475694 command_runner.go:130] >     }
	I1216 04:29:40.056308  475694 command_runner.go:130] >   ]
	I1216 04:29:40.056312  475694 command_runner.go:130] > }
	I1216 04:29:40.057842  475694 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:29:40.057866  475694 cache_images.go:86] Images are preloaded, skipping loading
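
The image dumps above follow the CRI images schema, so the same inventory can be pulled out of the JSON with a jq one-liner (a convenience sketch, assuming jq is present on the node):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
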
	I1216 04:29:40.057874  475694 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 04:29:40.058028  475694 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-763073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
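
The kubelet flags above land in a systemd unit override, so the effective unit, including this drop-in, can be reviewed with:

    sudo systemctl cat kubelet
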
	I1216 04:29:40.058117  475694 ssh_runner.go:195] Run: crio config
	I1216 04:29:40.108801  475694 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1216 04:29:40.108825  475694 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1216 04:29:40.108833  475694 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1216 04:29:40.108837  475694 command_runner.go:130] > #
	I1216 04:29:40.108844  475694 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1216 04:29:40.108850  475694 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1216 04:29:40.108857  475694 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1216 04:29:40.108874  475694 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1216 04:29:40.108891  475694 command_runner.go:130] > # reload'.
	I1216 04:29:40.108898  475694 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1216 04:29:40.108905  475694 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1216 04:29:40.108915  475694 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1216 04:29:40.108922  475694 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1216 04:29:40.108925  475694 command_runner.go:130] > [crio]
	I1216 04:29:40.108932  475694 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1216 04:29:40.108939  475694 command_runner.go:130] > # containers images, in this directory.
	I1216 04:29:40.109485  475694 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1216 04:29:40.109505  475694 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1216 04:29:40.110050  475694 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1216 04:29:40.110069  475694 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1216 04:29:40.110418  475694 command_runner.go:130] > # imagestore = ""
	I1216 04:29:40.110434  475694 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1216 04:29:40.110442  475694 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1216 04:29:40.110623  475694 command_runner.go:130] > # storage_driver = "overlay"
	I1216 04:29:40.110671  475694 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1216 04:29:40.110692  475694 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1216 04:29:40.110809  475694 command_runner.go:130] > # storage_option = [
	I1216 04:29:40.110816  475694 command_runner.go:130] > # ]
	I1216 04:29:40.110824  475694 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1216 04:29:40.110831  475694 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1216 04:29:40.110973  475694 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1216 04:29:40.110983  475694 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1216 04:29:40.111015  475694 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1216 04:29:40.111021  475694 command_runner.go:130] > # always happen on a node reboot
	I1216 04:29:40.111194  475694 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1216 04:29:40.111214  475694 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1216 04:29:40.111221  475694 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1216 04:29:40.111260  475694 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1216 04:29:40.111402  475694 command_runner.go:130] > # version_file_persist = ""
	I1216 04:29:40.111414  475694 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1216 04:29:40.111423  475694 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1216 04:29:40.111428  475694 command_runner.go:130] > # internal_wipe = true
	I1216 04:29:40.111436  475694 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1216 04:29:40.111471  475694 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1216 04:29:40.111604  475694 command_runner.go:130] > # internal_repair = true
	I1216 04:29:40.111614  475694 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1216 04:29:40.111621  475694 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1216 04:29:40.111626  475694 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1216 04:29:40.111750  475694 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1216 04:29:40.111761  475694 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1216 04:29:40.111764  475694 command_runner.go:130] > [crio.api]
	I1216 04:29:40.111770  475694 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1216 04:29:40.111973  475694 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1216 04:29:40.111983  475694 command_runner.go:130] > # IP address on which the stream server will listen.
	I1216 04:29:40.112123  475694 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1216 04:29:40.112134  475694 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1216 04:29:40.112139  475694 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1216 04:29:40.112334  475694 command_runner.go:130] > # stream_port = "0"
	I1216 04:29:40.112344  475694 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1216 04:29:40.112496  475694 command_runner.go:130] > # stream_enable_tls = false
	I1216 04:29:40.112506  475694 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1216 04:29:40.112646  475694 command_runner.go:130] > # stream_idle_timeout = ""
	I1216 04:29:40.112658  475694 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1216 04:29:40.112664  475694 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1216 04:29:40.112790  475694 command_runner.go:130] > # stream_tls_cert = ""
	I1216 04:29:40.112800  475694 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1216 04:29:40.112806  475694 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1216 04:29:40.112930  475694 command_runner.go:130] > # stream_tls_key = ""
	I1216 04:29:40.112940  475694 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1216 04:29:40.112947  475694 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1216 04:29:40.112956  475694 command_runner.go:130] > # automatically pick up the changes.
	I1216 04:29:40.113120  475694 command_runner.go:130] > # stream_tls_ca = ""
	I1216 04:29:40.113148  475694 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 04:29:40.113407  475694 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1216 04:29:40.113455  475694 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 04:29:40.113595  475694 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1216 04:29:40.113624  475694 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1216 04:29:40.113657  475694 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1216 04:29:40.113680  475694 command_runner.go:130] > [crio.runtime]
	I1216 04:29:40.113702  475694 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1216 04:29:40.113736  475694 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1216 04:29:40.113757  475694 command_runner.go:130] > # "nofile=1024:2048"
	I1216 04:29:40.113777  475694 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1216 04:29:40.113795  475694 command_runner.go:130] > # default_ulimits = [
	I1216 04:29:40.113822  475694 command_runner.go:130] > # ]
	I1216 04:29:40.113845  475694 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1216 04:29:40.113998  475694 command_runner.go:130] > # no_pivot = false
	I1216 04:29:40.114026  475694 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1216 04:29:40.114058  475694 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1216 04:29:40.114076  475694 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1216 04:29:40.114109  475694 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1216 04:29:40.114138  475694 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1216 04:29:40.114159  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 04:29:40.114189  475694 command_runner.go:130] > # conmon = ""
	I1216 04:29:40.114211  475694 command_runner.go:130] > # Cgroup setting for conmon
	I1216 04:29:40.114233  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1216 04:29:40.114382  475694 command_runner.go:130] > conmon_cgroup = "pod"
	I1216 04:29:40.114414  475694 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1216 04:29:40.114449  475694 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1216 04:29:40.114469  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 04:29:40.114514  475694 command_runner.go:130] > # conmon_env = [
	I1216 04:29:40.114538  475694 command_runner.go:130] > # ]
	I1216 04:29:40.114560  475694 command_runner.go:130] > # Additional environment variables to set for all the
	I1216 04:29:40.114591  475694 command_runner.go:130] > # containers. These are overridden if set in the
	I1216 04:29:40.114614  475694 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1216 04:29:40.114632  475694 command_runner.go:130] > # default_env = [
	I1216 04:29:40.114649  475694 command_runner.go:130] > # ]
	I1216 04:29:40.114679  475694 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1216 04:29:40.114706  475694 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1216 04:29:40.114884  475694 command_runner.go:130] > # selinux = false
	I1216 04:29:40.114896  475694 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1216 04:29:40.114903  475694 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1216 04:29:40.114909  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.114913  475694 command_runner.go:130] > # seccomp_profile = ""
	I1216 04:29:40.114950  475694 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1216 04:29:40.114969  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.114984  475694 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1216 04:29:40.115020  475694 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1216 04:29:40.115046  475694 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1216 04:29:40.115055  475694 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1216 04:29:40.115062  475694 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1216 04:29:40.115067  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.115072  475694 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1216 04:29:40.115077  475694 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1216 04:29:40.115116  475694 command_runner.go:130] > # the cgroup blockio controller.
	I1216 04:29:40.115133  475694 command_runner.go:130] > # blockio_config_file = ""
	I1216 04:29:40.115175  475694 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1216 04:29:40.115196  475694 command_runner.go:130] > # blockio parameters.
	I1216 04:29:40.115214  475694 command_runner.go:130] > # blockio_reload = false
	I1216 04:29:40.115235  475694 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1216 04:29:40.115262  475694 command_runner.go:130] > # irqbalance daemon.
	I1216 04:29:40.115417  475694 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1216 04:29:40.115505  475694 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1216 04:29:40.115615  475694 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1216 04:29:40.115655  475694 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1216 04:29:40.115678  475694 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1216 04:29:40.115698  475694 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1216 04:29:40.115716  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.115745  475694 command_runner.go:130] > # rdt_config_file = ""
	I1216 04:29:40.115769  475694 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1216 04:29:40.115788  475694 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1216 04:29:40.115822  475694 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1216 04:29:40.115844  475694 command_runner.go:130] > # separate_pull_cgroup = ""
	I1216 04:29:40.115864  475694 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1216 04:29:40.115884  475694 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1216 04:29:40.115919  475694 command_runner.go:130] > # will be added.
	I1216 04:29:40.115936  475694 command_runner.go:130] > # default_capabilities = [
	I1216 04:29:40.115952  475694 command_runner.go:130] > # 	"CHOWN",
	I1216 04:29:40.115983  475694 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1216 04:29:40.116006  475694 command_runner.go:130] > # 	"FSETID",
	I1216 04:29:40.116024  475694 command_runner.go:130] > # 	"FOWNER",
	I1216 04:29:40.116040  475694 command_runner.go:130] > # 	"SETGID",
	I1216 04:29:40.116070  475694 command_runner.go:130] > # 	"SETUID",
	I1216 04:29:40.116112  475694 command_runner.go:130] > # 	"SETPCAP",
	I1216 04:29:40.116150  475694 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1216 04:29:40.116170  475694 command_runner.go:130] > # 	"KILL",
	I1216 04:29:40.116187  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116209  475694 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1216 04:29:40.116243  475694 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1216 04:29:40.116264  475694 command_runner.go:130] > # add_inheritable_capabilities = false
	I1216 04:29:40.116284  475694 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1216 04:29:40.116316  475694 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 04:29:40.116336  475694 command_runner.go:130] > default_sysctls = [
	I1216 04:29:40.116352  475694 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1216 04:29:40.116370  475694 command_runner.go:130] > ]
	I1216 04:29:40.116402  475694 command_runner.go:130] > # List of devices on the host that a
	I1216 04:29:40.116430  475694 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1216 04:29:40.116449  475694 command_runner.go:130] > # allowed_devices = [
	I1216 04:29:40.116482  475694 command_runner.go:130] > # 	"/dev/fuse",
	I1216 04:29:40.116502  475694 command_runner.go:130] > # 	"/dev/net/tun",
	I1216 04:29:40.116519  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116549  475694 command_runner.go:130] > # List of additional devices, specified as
	I1216 04:29:40.116842  475694 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1216 04:29:40.116898  475694 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1216 04:29:40.116921  475694 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 04:29:40.116950  475694 command_runner.go:130] > # additional_devices = [
	I1216 04:29:40.116977  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116996  475694 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1216 04:29:40.117028  475694 command_runner.go:130] > # cdi_spec_dirs = [
	I1216 04:29:40.117054  475694 command_runner.go:130] > # 	"/etc/cdi",
	I1216 04:29:40.117101  475694 command_runner.go:130] > # 	"/var/run/cdi",
	I1216 04:29:40.117118  475694 command_runner.go:130] > # ]
	I1216 04:29:40.117139  475694 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1216 04:29:40.117174  475694 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1216 04:29:40.117193  475694 command_runner.go:130] > # Defaults to false.
	I1216 04:29:40.117222  475694 command_runner.go:130] > # device_ownership_from_security_context = false
	I1216 04:29:40.117264  475694 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1216 04:29:40.117284  475694 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1216 04:29:40.117301  475694 command_runner.go:130] > # hooks_dir = [
	I1216 04:29:40.117338  475694 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1216 04:29:40.117357  475694 command_runner.go:130] > # ]
	I1216 04:29:40.117377  475694 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1216 04:29:40.117412  475694 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1216 04:29:40.117421  475694 command_runner.go:130] > # its default mounts from the following two files:
	I1216 04:29:40.117425  475694 command_runner.go:130] > #
	I1216 04:29:40.117432  475694 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1216 04:29:40.117438  475694 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1216 04:29:40.117444  475694 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1216 04:29:40.117447  475694 command_runner.go:130] > #
	I1216 04:29:40.117454  475694 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1216 04:29:40.117461  475694 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1216 04:29:40.117467  475694 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1216 04:29:40.117517  475694 command_runner.go:130] > #      only add mounts it finds in this file.
	I1216 04:29:40.117534  475694 command_runner.go:130] > #
	I1216 04:29:40.117567  475694 command_runner.go:130] > # default_mounts_file = ""
	I1216 04:29:40.117599  475694 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1216 04:29:40.117644  475694 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1216 04:29:40.117670  475694 command_runner.go:130] > # pids_limit = -1
	I1216 04:29:40.117691  475694 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1216 04:29:40.117725  475694 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1216 04:29:40.117753  475694 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1216 04:29:40.117773  475694 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1216 04:29:40.117806  475694 command_runner.go:130] > # log_size_max = -1
	I1216 04:29:40.117830  475694 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1216 04:29:40.117850  475694 command_runner.go:130] > # log_to_journald = false
	I1216 04:29:40.117889  475694 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1216 04:29:40.117908  475694 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1216 04:29:40.117927  475694 command_runner.go:130] > # Path to directory for container attach sockets.
	I1216 04:29:40.117963  475694 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1216 04:29:40.117992  475694 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1216 04:29:40.118011  475694 command_runner.go:130] > # bind_mount_prefix = ""
	I1216 04:29:40.118045  475694 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1216 04:29:40.118064  475694 command_runner.go:130] > # read_only = false
	I1216 04:29:40.118085  475694 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1216 04:29:40.118118  475694 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1216 04:29:40.118145  475694 command_runner.go:130] > # live configuration reload.
	I1216 04:29:40.118163  475694 command_runner.go:130] > # log_level = "info"
	I1216 04:29:40.118200  475694 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1216 04:29:40.118229  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.118246  475694 command_runner.go:130] > # log_filter = ""
	I1216 04:29:40.118284  475694 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1216 04:29:40.118305  475694 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1216 04:29:40.118324  475694 command_runner.go:130] > # separated by comma.
	I1216 04:29:40.118360  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118379  475694 command_runner.go:130] > # uid_mappings = ""
	I1216 04:29:40.118400  475694 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1216 04:29:40.118433  475694 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1216 04:29:40.118453  475694 command_runner.go:130] > # separated by comma.
	I1216 04:29:40.118475  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118516  475694 command_runner.go:130] > # gid_mappings = ""
	I1216 04:29:40.118547  475694 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1216 04:29:40.118581  475694 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 04:29:40.118608  475694 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 04:29:40.118630  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118663  475694 command_runner.go:130] > # minimum_mappable_uid = -1
	I1216 04:29:40.118694  475694 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1216 04:29:40.118716  475694 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 04:29:40.118867  475694 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 04:29:40.119059  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.119080  475694 command_runner.go:130] > # minimum_mappable_gid = -1
	I1216 04:29:40.119119  475694 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1216 04:29:40.119149  475694 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1216 04:29:40.119169  475694 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1216 04:29:40.119206  475694 command_runner.go:130] > # ctr_stop_timeout = 30
	I1216 04:29:40.119228  475694 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1216 04:29:40.119249  475694 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1216 04:29:40.119286  475694 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1216 04:29:40.119304  475694 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1216 04:29:40.119323  475694 command_runner.go:130] > # drop_infra_ctr = true
	I1216 04:29:40.119357  475694 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1216 04:29:40.119378  475694 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1216 04:29:40.119425  475694 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1216 04:29:40.119453  475694 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1216 04:29:40.119476  475694 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1216 04:29:40.119511  475694 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1216 04:29:40.119541  475694 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1216 04:29:40.119560  475694 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1216 04:29:40.119590  475694 command_runner.go:130] > # shared_cpuset = ""
	I1216 04:29:40.119612  475694 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1216 04:29:40.119632  475694 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1216 04:29:40.119663  475694 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1216 04:29:40.119695  475694 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1216 04:29:40.119739  475694 command_runner.go:130] > # pinns_path = ""
	I1216 04:29:40.119766  475694 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1216 04:29:40.119787  475694 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1216 04:29:40.119820  475694 command_runner.go:130] > # enable_criu_support = true
	I1216 04:29:40.119849  475694 command_runner.go:130] > # Enable/disable the generation of the container and
	I1216 04:29:40.119870  475694 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1216 04:29:40.119901  475694 command_runner.go:130] > # enable_pod_events = false
	I1216 04:29:40.119923  475694 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1216 04:29:40.119945  475694 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1216 04:29:40.119977  475694 command_runner.go:130] > # default_runtime = "crun"
	I1216 04:29:40.120005  475694 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1216 04:29:40.120029  475694 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1216 04:29:40.120074  475694 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1216 04:29:40.120094  475694 command_runner.go:130] > # creation as a file is not desired either.
	I1216 04:29:40.120134  475694 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1216 04:29:40.120162  475694 command_runner.go:130] > # the hostname is being managed dynamically.
	I1216 04:29:40.120182  475694 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1216 04:29:40.120216  475694 command_runner.go:130] > # ]
	I1216 04:29:40.120248  475694 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1216 04:29:40.120270  475694 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1216 04:29:40.120320  475694 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1216 04:29:40.120347  475694 command_runner.go:130] > # Each entry in the table should follow the format:
	I1216 04:29:40.120396  475694 command_runner.go:130] > #
	I1216 04:29:40.120416  475694 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1216 04:29:40.120435  475694 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1216 04:29:40.120469  475694 command_runner.go:130] > # runtime_type = "oci"
	I1216 04:29:40.120490  475694 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1216 04:29:40.120514  475694 command_runner.go:130] > # inherit_default_runtime = false
	I1216 04:29:40.120552  475694 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1216 04:29:40.120570  475694 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1216 04:29:40.120589  475694 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1216 04:29:40.120618  475694 command_runner.go:130] > # monitor_env = []
	I1216 04:29:40.120639  475694 command_runner.go:130] > # privileged_without_host_devices = false
	I1216 04:29:40.120667  475694 command_runner.go:130] > # allowed_annotations = []
	I1216 04:29:40.120700  475694 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1216 04:29:40.120720  475694 command_runner.go:130] > # no_sync_log = false
	I1216 04:29:40.120739  475694 command_runner.go:130] > # default_annotations = {}
	I1216 04:29:40.120771  475694 command_runner.go:130] > # stream_websockets = false
	I1216 04:29:40.120795  475694 command_runner.go:130] > # seccomp_profile = ""
	I1216 04:29:40.120859  475694 command_runner.go:130] > # Where:
	I1216 04:29:40.120892  475694 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1216 04:29:40.120926  475694 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1216 04:29:40.120956  475694 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1216 04:29:40.120976  475694 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1216 04:29:40.121008  475694 command_runner.go:130] > #   in $PATH.
	I1216 04:29:40.121038  475694 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1216 04:29:40.121057  475694 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1216 04:29:40.121115  475694 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1216 04:29:40.121133  475694 command_runner.go:130] > #   state.
	I1216 04:29:40.121155  475694 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1216 04:29:40.121189  475694 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1216 04:29:40.121228  475694 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1216 04:29:40.121250  475694 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1216 04:29:40.121270  475694 command_runner.go:130] > #   the values from the default runtime on load time.
	I1216 04:29:40.121300  475694 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1216 04:29:40.121328  475694 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1216 04:29:40.121349  475694 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1216 04:29:40.121370  475694 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1216 04:29:40.121404  475694 command_runner.go:130] > #   The currently recognized values are:
	I1216 04:29:40.121434  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1216 04:29:40.121457  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1216 04:29:40.121484  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1216 04:29:40.121518  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1216 04:29:40.121541  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1216 04:29:40.121564  475694 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1216 04:29:40.121592  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1216 04:29:40.121620  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1216 04:29:40.121640  475694 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1216 04:29:40.121671  475694 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1216 04:29:40.121692  475694 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1216 04:29:40.121712  475694 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1216 04:29:40.121747  475694 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1216 04:29:40.121775  475694 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1216 04:29:40.121796  475694 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1216 04:29:40.121818  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1216 04:29:40.121849  475694 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1216 04:29:40.121873  475694 command_runner.go:130] > #   deprecated option "conmon".
	I1216 04:29:40.121896  475694 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1216 04:29:40.121916  475694 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1216 04:29:40.121945  475694 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1216 04:29:40.121969  475694 command_runner.go:130] > #   should be moved to the container's cgroup
	I1216 04:29:40.121989  475694 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1216 04:29:40.122009  475694 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1216 04:29:40.122039  475694 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1216 04:29:40.122065  475694 command_runner.go:130] > #   conmon-rs by using:
	I1216 04:29:40.122085  475694 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1216 04:29:40.122108  475694 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1216 04:29:40.122138  475694 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1216 04:29:40.122166  475694 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1216 04:29:40.122184  475694 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1216 04:29:40.122204  475694 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1216 04:29:40.122236  475694 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1216 04:29:40.122262  475694 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1216 04:29:40.122285  475694 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1216 04:29:40.122332  475694 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1216 04:29:40.122360  475694 command_runner.go:130] > #   when a machine crash happens.
	I1216 04:29:40.122382  475694 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1216 04:29:40.122406  475694 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1216 04:29:40.122443  475694 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1216 04:29:40.122473  475694 command_runner.go:130] > #   seccomp profile for the runtime.
	I1216 04:29:40.122495  475694 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1216 04:29:40.122537  475694 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1216 04:29:40.122553  475694 command_runner.go:130] > #
	I1216 04:29:40.122572  475694 command_runner.go:130] > # Using the seccomp notifier feature:
	I1216 04:29:40.122589  475694 command_runner.go:130] > #
	I1216 04:29:40.122624  475694 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1216 04:29:40.122646  475694 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1216 04:29:40.122662  475694 command_runner.go:130] > #
	I1216 04:29:40.122693  475694 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1216 04:29:40.122721  475694 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1216 04:29:40.122737  475694 command_runner.go:130] > #
	I1216 04:29:40.122758  475694 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1216 04:29:40.122777  475694 command_runner.go:130] > # feature.
	I1216 04:29:40.122810  475694 command_runner.go:130] > #
	I1216 04:29:40.122842  475694 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1216 04:29:40.122863  475694 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1216 04:29:40.122893  475694 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1216 04:29:40.122913  475694 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1216 04:29:40.122933  475694 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1216 04:29:40.122960  475694 command_runner.go:130] > #
	I1216 04:29:40.122986  475694 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1216 04:29:40.123006  475694 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1216 04:29:40.123023  475694 command_runner.go:130] > #
	I1216 04:29:40.123043  475694 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1216 04:29:40.123079  475694 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1216 04:29:40.123096  475694 command_runner.go:130] > #
	I1216 04:29:40.123117  475694 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1216 04:29:40.123147  475694 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1216 04:29:40.123171  475694 command_runner.go:130] > # limitation.
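To make the notifier walkthrough above concrete, here is a minimal sketch of a runtime handler that allows the notifier annotation. This is editorial, not part of this test run: the handler name "crun-notify" is hypothetical, and the paths simply mirror the crun entry that follows.

	[crio.runtime.runtimes.crun-notify]
	runtime_path = "/usr/libexec/crio/crun"       # hypothetical handler reusing the crun binary
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",  # lets pods on this handler opt into the notifier
	]

A pod using this handler would then set the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and a restartPolicy of "Never", as described above.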
	I1216 04:29:40.123187  475694 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1216 04:29:40.123204  475694 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1216 04:29:40.123225  475694 command_runner.go:130] > runtime_type = ""
	I1216 04:29:40.123264  475694 command_runner.go:130] > runtime_root = "/run/crun"
	I1216 04:29:40.123284  475694 command_runner.go:130] > inherit_default_runtime = false
	I1216 04:29:40.123302  475694 command_runner.go:130] > runtime_config_path = ""
	I1216 04:29:40.123331  475694 command_runner.go:130] > container_min_memory = ""
	I1216 04:29:40.123357  475694 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 04:29:40.123375  475694 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 04:29:40.123394  475694 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 04:29:40.123413  475694 command_runner.go:130] > allowed_annotations = [
	I1216 04:29:40.123445  475694 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1216 04:29:40.123463  475694 command_runner.go:130] > ]
	I1216 04:29:40.123482  475694 command_runner.go:130] > privileged_without_host_devices = false
	I1216 04:29:40.123501  475694 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1216 04:29:40.123534  475694 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1216 04:29:40.123552  475694 command_runner.go:130] > runtime_type = ""
	I1216 04:29:40.123570  475694 command_runner.go:130] > runtime_root = "/run/runc"
	I1216 04:29:40.123589  475694 command_runner.go:130] > inherit_default_runtime = false
	I1216 04:29:40.123625  475694 command_runner.go:130] > runtime_config_path = ""
	I1216 04:29:40.123644  475694 command_runner.go:130] > container_min_memory = ""
	I1216 04:29:40.123670  475694 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 04:29:40.123707  475694 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 04:29:40.123742  475694 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 04:29:40.123785  475694 command_runner.go:130] > privileged_without_host_devices = false
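As a sketch of the [crio.runtime.runtimes.<handler>] format documented above, a hypothetical VM-type handler could look like the following; the Kata paths are illustrative and not taken from this run.

	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/containerd-shim-kata-v2"                 # illustrative binary path
	runtime_type = "vm"                                               # "oci" is assumed when omitted
	runtime_config_path = "/etc/kata-containers/configuration.toml"   # only valid with runtime_type = "vm"
	privileged_without_host_devices = true                            # keep host devices out of privileged containers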
	I1216 04:29:40.123815  475694 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1216 04:29:40.123837  475694 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1216 04:29:40.123859  475694 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1216 04:29:40.123892  475694 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1216 04:29:40.123918  475694 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1216 04:29:40.123943  475694 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1216 04:29:40.123978  475694 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1216 04:29:40.123998  475694 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1216 04:29:40.124022  475694 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1216 04:29:40.124054  475694 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1216 04:29:40.124075  475694 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1216 04:29:40.124108  475694 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1216 04:29:40.124142  475694 command_runner.go:130] > # Example:
	I1216 04:29:40.124163  475694 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1216 04:29:40.124183  475694 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1216 04:29:40.124217  475694 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1216 04:29:40.124245  475694 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1216 04:29:40.124262  475694 command_runner.go:130] > # cpuset = "0-1"
	I1216 04:29:40.124279  475694 command_runner.go:130] > # cpushares = "5"
	I1216 04:29:40.124296  475694 command_runner.go:130] > # cpuquota = "1000"
	I1216 04:29:40.124329  475694 command_runner.go:130] > # cpuperiod = "100000"
	I1216 04:29:40.124347  475694 command_runner.go:130] > # cpulimit = "35"
	I1216 04:29:40.124367  475694 command_runner.go:130] > # Where:
	I1216 04:29:40.124385  475694 command_runner.go:130] > # The workload name is workload-type.
	I1216 04:29:40.124421  475694 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1216 04:29:40.124440  475694 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1216 04:29:40.124460  475694 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1216 04:29:40.124492  475694 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1216 04:29:40.124517  475694 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1216 04:29:40.124536  475694 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1216 04:29:40.124556  475694 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1216 04:29:40.124575  475694 command_runner.go:130] > # Default value is set to true
	I1216 04:29:40.124610  475694 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1216 04:29:40.124630  475694 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1216 04:29:40.124649  475694 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1216 04:29:40.124667  475694 command_runner.go:130] > # Default value is set to 'false'
	I1216 04:29:40.124699  475694 command_runner.go:130] > # disable_hostport_mapping = false
	I1216 04:29:40.124718  475694 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1216 04:29:40.124741  475694 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1216 04:29:40.124768  475694 command_runner.go:130] > # timezone = ""
	I1216 04:29:40.124795  475694 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1216 04:29:40.124810  475694 command_runner.go:130] > #
	I1216 04:29:40.124829  475694 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1216 04:29:40.124850  475694 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1216 04:29:40.124892  475694 command_runner.go:130] > [crio.image]
	I1216 04:29:40.124912  475694 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1216 04:29:40.124930  475694 command_runner.go:130] > # default_transport = "docker://"
	I1216 04:29:40.124959  475694 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1216 04:29:40.125019  475694 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1216 04:29:40.125026  475694 command_runner.go:130] > # global_auth_file = ""
	I1216 04:29:40.125031  475694 command_runner.go:130] > # The image used to instantiate infra containers.
	I1216 04:29:40.125036  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.125041  475694 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.125093  475694 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1216 04:29:40.125106  475694 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1216 04:29:40.125111  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.125121  475694 command_runner.go:130] > # pause_image_auth_file = ""
	I1216 04:29:40.125127  475694 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1216 04:29:40.125133  475694 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1216 04:29:40.125139  475694 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1216 04:29:40.125145  475694 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1216 04:29:40.125160  475694 command_runner.go:130] > # pause_command = "/pause"
	I1216 04:29:40.125167  475694 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1216 04:29:40.125172  475694 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1216 04:29:40.125178  475694 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1216 04:29:40.125184  475694 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1216 04:29:40.125190  475694 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1216 04:29:40.125198  475694 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1216 04:29:40.125209  475694 command_runner.go:130] > # pinned_images = [
	I1216 04:29:40.125213  475694 command_runner.go:130] > # ]
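For example, pinning the default pause image shown earlier so it is excluded from kubelet garbage collection (a minimal sketch):

	pause_image = "registry.k8s.io/pause:3.10.1"
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",   # exact match; a trailing * would turn this into a glob
	]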
	I1216 04:29:40.125219  475694 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1216 04:29:40.125226  475694 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1216 04:29:40.125232  475694 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1216 04:29:40.125238  475694 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1216 04:29:40.125243  475694 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1216 04:29:40.125248  475694 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1216 04:29:40.125253  475694 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1216 04:29:40.125268  475694 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1216 04:29:40.125275  475694 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1216 04:29:40.125281  475694 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1216 04:29:40.125287  475694 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1216 04:29:40.125291  475694 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1216 04:29:40.125298  475694 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1216 04:29:40.125304  475694 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1216 04:29:40.125308  475694 command_runner.go:130] > # changing them here.
	I1216 04:29:40.125313  475694 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1216 04:29:40.125317  475694 command_runner.go:130] > # insecure_registries = [
	I1216 04:29:40.125325  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125331  475694 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1216 04:29:40.125338  475694 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1216 04:29:40.125343  475694 command_runner.go:130] > # image_volumes = "mkdir"
	I1216 04:29:40.125348  475694 command_runner.go:130] > # Temporary directory to use for storing big files
	I1216 04:29:40.125352  475694 command_runner.go:130] > # big_files_temporary_dir = ""
	I1216 04:29:40.125358  475694 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1216 04:29:40.125365  475694 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1216 04:29:40.125369  475694 command_runner.go:130] > # auto_reload_registries = false
	I1216 04:29:40.125375  475694 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1216 04:29:40.125386  475694 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1216 04:29:40.125392  475694 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1216 04:29:40.125396  475694 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1216 04:29:40.125400  475694 command_runner.go:130] > # The mode of short name resolution.
	I1216 04:29:40.125406  475694 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1216 04:29:40.125414  475694 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1216 04:29:40.125419  475694 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1216 04:29:40.125422  475694 command_runner.go:130] > # short_name_mode = "enforcing"
	I1216 04:29:40.125428  475694 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1216 04:29:40.125435  475694 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1216 04:29:40.125439  475694 command_runner.go:130] > # oci_artifact_mount_support = true
	I1216 04:29:40.125445  475694 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1216 04:29:40.125449  475694 command_runner.go:130] > # CNI plugins.
	I1216 04:29:40.125456  475694 command_runner.go:130] > [crio.network]
	I1216 04:29:40.125462  475694 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1216 04:29:40.125467  475694 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1216 04:29:40.125471  475694 command_runner.go:130] > # cni_default_network = ""
	I1216 04:29:40.125476  475694 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1216 04:29:40.125481  475694 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1216 04:29:40.125487  475694 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1216 04:29:40.125498  475694 command_runner.go:130] > # plugin_dirs = [
	I1216 04:29:40.125501  475694 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1216 04:29:40.125504  475694 command_runner.go:130] > # ]
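A minimal [crio.network] sketch built from the options above; the network name "kindnet" is an assumption here, echoing the CNI this log later recommends for the docker driver with crio.

	[crio.network]
	cni_default_network = "kindnet"   # assumed name; with "", CRI-O picks the first config in network_dir
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]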
	I1216 04:29:40.125508  475694 command_runner.go:130] > # List of included pod metrics.
	I1216 04:29:40.125512  475694 command_runner.go:130] > # included_pod_metrics = [
	I1216 04:29:40.125515  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125521  475694 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1216 04:29:40.125524  475694 command_runner.go:130] > [crio.metrics]
	I1216 04:29:40.125529  475694 command_runner.go:130] > # Globally enable or disable metrics support.
	I1216 04:29:40.125533  475694 command_runner.go:130] > # enable_metrics = false
	I1216 04:29:40.125537  475694 command_runner.go:130] > # Specify enabled metrics collectors.
	I1216 04:29:40.125542  475694 command_runner.go:130] > # Per default all metrics are enabled.
	I1216 04:29:40.125549  475694 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1216 04:29:40.125557  475694 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1216 04:29:40.125564  475694 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1216 04:29:40.125568  475694 command_runner.go:130] > # metrics_collectors = [
	I1216 04:29:40.125572  475694 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1216 04:29:40.125576  475694 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1216 04:29:40.125580  475694 command_runner.go:130] > # 	"containers_oom_total",
	I1216 04:29:40.125584  475694 command_runner.go:130] > # 	"processes_defunct",
	I1216 04:29:40.125587  475694 command_runner.go:130] > # 	"operations_total",
	I1216 04:29:40.125591  475694 command_runner.go:130] > # 	"operations_latency_seconds",
	I1216 04:29:40.125596  475694 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1216 04:29:40.125600  475694 command_runner.go:130] > # 	"operations_errors_total",
	I1216 04:29:40.125604  475694 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1216 04:29:40.125608  475694 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1216 04:29:40.125615  475694 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1216 04:29:40.125619  475694 command_runner.go:130] > # 	"image_pulls_success_total",
	I1216 04:29:40.125623  475694 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1216 04:29:40.125627  475694 command_runner.go:130] > # 	"containers_oom_count_total",
	I1216 04:29:40.125632  475694 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1216 04:29:40.125636  475694 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1216 04:29:40.125640  475694 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1216 04:29:40.125643  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125649  475694 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1216 04:29:40.125653  475694 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1216 04:29:40.125658  475694 command_runner.go:130] > # The port on which the metrics server will listen.
	I1216 04:29:40.125662  475694 command_runner.go:130] > # metrics_port = 9090
	I1216 04:29:40.125667  475694 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1216 04:29:40.125670  475694 command_runner.go:130] > # metrics_socket = ""
	I1216 04:29:40.125678  475694 command_runner.go:130] > # The certificate for the secure metrics server.
	I1216 04:29:40.125684  475694 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1216 04:29:40.125690  475694 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1216 04:29:40.125694  475694 command_runner.go:130] > # certificate on any modification event.
	I1216 04:29:40.125698  475694 command_runner.go:130] > # metrics_cert = ""
	I1216 04:29:40.125703  475694 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1216 04:29:40.125708  475694 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1216 04:29:40.125711  475694 command_runner.go:130] > # metrics_key = ""
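Putting the metrics options together, a minimal sketch that enables the endpoint with a subset of the collectors listed above:

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",               # same as "crio_operations_total" per the prefix note above
		"image_pulls_failure_total",
	]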
	I1216 04:29:40.125718  475694 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1216 04:29:40.125721  475694 command_runner.go:130] > [crio.tracing]
	I1216 04:29:40.125726  475694 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1216 04:29:40.125730  475694 command_runner.go:130] > # enable_tracing = false
	I1216 04:29:40.125735  475694 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1216 04:29:40.125740  475694 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1216 04:29:40.125747  475694 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1216 04:29:40.125753  475694 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
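And the matching sketch for [crio.tracing], sampling every span per the comment above:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"           # default gRPC collector address
	tracing_sampling_rate_per_million = 1000000   # 1000000 = always sample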
	I1216 04:29:40.125757  475694 command_runner.go:130] > # CRI-O NRI configuration.
	I1216 04:29:40.125760  475694 command_runner.go:130] > [crio.nri]
	I1216 04:29:40.125764  475694 command_runner.go:130] > # Globally enable or disable NRI.
	I1216 04:29:40.125772  475694 command_runner.go:130] > # enable_nri = true
	I1216 04:29:40.125776  475694 command_runner.go:130] > # NRI socket to listen on.
	I1216 04:29:40.125781  475694 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1216 04:29:40.125785  475694 command_runner.go:130] > # NRI plugin directory to use.
	I1216 04:29:40.125789  475694 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1216 04:29:40.125794  475694 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1216 04:29:40.125799  475694 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1216 04:29:40.125804  475694 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1216 04:29:40.125861  475694 command_runner.go:130] > # nri_disable_connections = false
	I1216 04:29:40.125867  475694 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1216 04:29:40.125871  475694 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1216 04:29:40.125876  475694 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1216 04:29:40.125881  475694 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1216 04:29:40.125885  475694 command_runner.go:130] > # NRI default validator configuration.
	I1216 04:29:40.125892  475694 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1216 04:29:40.125898  475694 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1216 04:29:40.125902  475694 command_runner.go:130] > # can be restricted/rejected:
	I1216 04:29:40.125905  475694 command_runner.go:130] > # - OCI hook injection
	I1216 04:29:40.125910  475694 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1216 04:29:40.125915  475694 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1216 04:29:40.125919  475694 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1216 04:29:40.125923  475694 command_runner.go:130] > # - adjustment of linux namespaces
	I1216 04:29:40.125929  475694 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1216 04:29:40.125936  475694 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1216 04:29:40.125941  475694 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1216 04:29:40.125944  475694 command_runner.go:130] > #
	I1216 04:29:40.125948  475694 command_runner.go:130] > # [crio.nri.default_validator]
	I1216 04:29:40.125953  475694 command_runner.go:130] > # nri_enable_default_validator = false
	I1216 04:29:40.125958  475694 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1216 04:29:40.125963  475694 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1216 04:29:40.125969  475694 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1216 04:29:40.125974  475694 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1216 04:29:40.125979  475694 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1216 04:29:40.125986  475694 command_runner.go:130] > # nri_validator_required_plugins = [
	I1216 04:29:40.125991  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125996  475694 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
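A sketch of the default validator described above, rejecting OCI hook injection while leaving the other adjustments allowed; the required plugin name is hypothetical.

	[crio.nri]
	enable_nri = true
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_required_plugins = [
		"my-policy-plugin",   # hypothetical plugin that must see every container creation
	]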
	I1216 04:29:40.126002  475694 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1216 04:29:40.126007  475694 command_runner.go:130] > [crio.stats]
	I1216 04:29:40.126013  475694 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1216 04:29:40.126018  475694 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1216 04:29:40.126022  475694 command_runner.go:130] > # stats_collection_period = 0
	I1216 04:29:40.126028  475694 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1216 04:29:40.126034  475694 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1216 04:29:40.126038  475694 command_runner.go:130] > # collection_period = 0
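Finally, a one-line sketch switching stats from on-demand to periodic collection:

	[crio.stats]
	stats_collection_period = 10   # seconds between collections; 0 (the default) means on-demand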
	I1216 04:29:40.126084  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086834829Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1216 04:29:40.126093  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086875912Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1216 04:29:40.126103  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086913837Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1216 04:29:40.126111  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086943031Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1216 04:29:40.126123  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.087027733Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:40.126132  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.087362399Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1216 04:29:40.126142  475694 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1216 04:29:40.126226  475694 cni.go:84] Creating CNI manager for ""
	I1216 04:29:40.126235  475694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:29:40.126255  475694 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:29:40.126277  475694 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-763073 NodeName:functional-763073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:29:40.126422  475694 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-763073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:29:40.126497  475694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:29:40.134815  475694 command_runner.go:130] > kubeadm
	I1216 04:29:40.134839  475694 command_runner.go:130] > kubectl
	I1216 04:29:40.134844  475694 command_runner.go:130] > kubelet
	I1216 04:29:40.134872  475694 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:29:40.134932  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:29:40.143529  475694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 04:29:40.156375  475694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:29:40.169188  475694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 04:29:40.182223  475694 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:29:40.185968  475694 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 04:29:40.186105  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:40.327743  475694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:29:41.068736  475694 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073 for IP: 192.168.49.2
	I1216 04:29:41.068757  475694 certs.go:195] generating shared ca certs ...
	I1216 04:29:41.068779  475694 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.069050  475694 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:29:41.069145  475694 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:29:41.069172  475694 certs.go:257] generating profile certs ...
	I1216 04:29:41.069366  475694 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key
	I1216 04:29:41.069439  475694 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195
	I1216 04:29:41.069492  475694 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key
	I1216 04:29:41.069508  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 04:29:41.069527  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 04:29:41.069550  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 04:29:41.069568  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 04:29:41.069598  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 04:29:41.069624  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 04:29:41.069636  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 04:29:41.069661  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 04:29:41.069722  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 04:29:41.069792  475694 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 04:29:41.069804  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:29:41.069832  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:29:41.069864  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:29:41.069933  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:29:41.070011  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:29:41.070050  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.070068  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.070082  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem -> /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.070740  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:29:41.088516  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:29:41.106273  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:29:41.124169  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:29:41.142346  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:29:41.160632  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:29:41.181690  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:29:41.199949  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:29:41.217789  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 04:29:41.237601  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:29:41.255073  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 04:29:41.272738  475694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:29:41.286149  475694 ssh_runner.go:195] Run: openssl version
	I1216 04:29:41.292023  475694 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 04:29:41.292477  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.299852  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 04:29:41.307795  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312150  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312182  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312250  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.353168  475694 command_runner.go:130] > 3ec20f2e
	I1216 04:29:41.353674  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:29:41.362516  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.370150  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:29:41.377841  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.381956  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.381986  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.382040  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.422880  475694 command_runner.go:130] > b5213941
	I1216 04:29:41.423347  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:29:41.430980  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.438640  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 04:29:41.446570  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450618  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450691  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450770  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.493534  475694 command_runner.go:130] > 51391683
	I1216 04:29:41.494044  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
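	The openssl/ln sequence above is how the host trust store gets wired up: OpenSSL resolves CAs in /etc/ssl/certs through `<subject-hash>.0` symlinks, so each PEM is hashed with `openssl x509 -hash -noout` and linked under that name. A sketch of the same flow in Go, shelling out to openssl (assumes openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA reproduces the pattern in the log: compute the OpenSSL subject
// hash of a PEM certificate, then symlink it as /etc/ssl/certs/<hash>.0 so
// the system trust store can find it.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e" above
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // equivalent to ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}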
	I1216 04:29:41.501730  475694 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:29:41.505651  475694 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:29:41.505723  475694 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 04:29:41.505736  475694 command_runner.go:130] > Device: 259,1	Inode: 1313043     Links: 1
	I1216 04:29:41.505744  475694 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:29:41.505751  475694 command_runner.go:130] > Access: 2025-12-16 04:25:32.918538317 +0000
	I1216 04:29:41.505756  475694 command_runner.go:130] > Modify: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505760  475694 command_runner.go:130] > Change: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505765  475694 command_runner.go:130] >  Birth: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505860  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:29:41.547026  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.547554  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:29:41.588926  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.589431  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:29:41.630503  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.630976  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:29:41.679374  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.679872  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:29:41.720872  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.720962  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 04:29:41.763843  475694 command_runner.go:130] > Certificate will not expire
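	The six `-checkend 86400` probes above ask OpenSSL whether each control-plane certificate expires within the next 86400 seconds (24 hours). The same check can be done without shelling out; a minimal pure-Go equivalent using only the standard library (the path is one of the certs checked in this run):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend 86400`: report whether the
// certificate's NotAfter falls inside the next duration d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}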
	I1216 04:29:41.764306  475694 kubeadm.go:401] StartCluster: {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:41.764397  475694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:29:41.764473  475694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:29:41.794813  475694 cri.go:89] found id: ""
	I1216 04:29:41.795018  475694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:29:41.802238  475694 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 04:29:41.802260  475694 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 04:29:41.802267  475694 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 04:29:41.803148  475694 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:29:41.803169  475694 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:29:41.803241  475694 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:29:41.810442  475694 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:29:41.810892  475694 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-763073" does not appear in /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.811005  475694 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-438353/kubeconfig needs updating (will repair): [kubeconfig missing "functional-763073" cluster setting kubeconfig missing "functional-763073" context setting]
	I1216 04:29:41.811272  475694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.811696  475694 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.811844  475694 kapi.go:59] client config for functional-763073: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
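	kubeconfig.go above decides the file "needs updating (will repair)" because the profile name is missing as both a cluster and a context entry. A sketch of that detection with client-go's clientcmd loader (assumes the k8s.io/client-go dependency; the output wording mimics the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// missingEntries checks a kubeconfig the way kubeconfig.go does in the log:
// the profile must appear as both a cluster and a context.
func missingEntries(path, profile string) ([]string, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return nil, err
	}
	var missing []string
	if _, ok := cfg.Clusters[profile]; !ok {
		missing = append(missing, "cluster")
	}
	if _, ok := cfg.Contexts[profile]; !ok {
		missing = append(missing, "context")
	}
	return missing, nil
}

func main() {
	m, err := missingEntries("/home/jenkins/minikube-integration/22158-438353/kubeconfig", "functional-763073")
	if err != nil {
		fmt.Println(err)
		return
	}
	if len(m) > 0 {
		fmt.Printf("kubeconfig needs updating (will repair): missing %v\n", m)
	}
}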
	I1216 04:29:41.812430  475694 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 04:29:41.812449  475694 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 04:29:41.812455  475694 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 04:29:41.812459  475694 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 04:29:41.812464  475694 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 04:29:41.812504  475694 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:29:41.812753  475694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:29:41.827245  475694 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 04:29:41.827324  475694 kubeadm.go:602] duration metric: took 24.148626ms to restartPrimaryControlPlane
	I1216 04:29:41.827348  475694 kubeadm.go:403] duration metric: took 63.050551ms to StartCluster
	I1216 04:29:41.827392  475694 settings.go:142] acquiring lock: {Name:mk7579526d30444d4a36dd9eeacfd82389e55168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.827497  475694 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.828225  475694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.828522  475694 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:29:41.828868  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:41.828926  475694 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:29:41.829003  475694 addons.go:70] Setting storage-provisioner=true in profile "functional-763073"
	I1216 04:29:41.829025  475694 addons.go:239] Setting addon storage-provisioner=true in "functional-763073"
	I1216 04:29:41.829051  475694 host.go:66] Checking if "functional-763073" exists ...
	I1216 04:29:41.829717  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.829866  475694 addons.go:70] Setting default-storageclass=true in profile "functional-763073"
	I1216 04:29:41.829889  475694 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-763073"
	I1216 04:29:41.830179  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.835425  475694 out.go:179] * Verifying Kubernetes components...
	I1216 04:29:41.843204  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:41.852282  475694 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.852487  475694 kapi.go:59] client config for functional-763073: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:29:41.852847  475694 addons.go:239] Setting addon default-storageclass=true in "functional-763073"
	I1216 04:29:41.852883  475694 host.go:66] Checking if "functional-763073" exists ...
	I1216 04:29:41.853441  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.902066  475694 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:29:41.905129  475694 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:41.905181  475694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:29:41.905276  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:41.908977  475694 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:41.909002  475694 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:29:41.909132  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:41.960105  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:41.975058  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
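	The two sshutil lines show where the 127.0.0.1:33148 endpoint comes from: cli_runner asked Docker which host port is published for the container's 22/tcp. A sketch using the same Go template string via `docker container inspect` (assumes the docker CLI on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is mapped to the container's
// port 22, using the template string the log shows cli_runner executing.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-763073")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("new ssh client: 127.0.0.1:" + port) // 33148 in this run
}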
	I1216 04:29:42.043859  475694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:29:42.092471  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:42.106008  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:42.818195  475694 node_ready.go:35] waiting up to 6m0s for node "functional-763073" to be "Ready" ...
	I1216 04:29:42.818367  475694 type.go:168] "Request Body" body=""
	I1216 04:29:42.818432  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:42.818659  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:42.818682  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818701  475694 retry.go:31] will retry after 327.643243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818740  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:42.818752  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818759  475694 retry.go:31] will retry after 171.339125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
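	Every apply in this stretch fails with connection refused on localhost:8441 because the apiserver is still coming back up, so retry.go schedules another attempt after a growing, jittered delay (327ms, 781ms, 1.1s, ... in this run). A minimal sketch of that retry loop; the backoff formula here is an illustration under stated assumptions, not minikube's exact policy:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry reruns a command until it succeeds or attempts run out,
// sleeping a jittered, growing delay between tries, like retry.go above.
func applyWithRetry(attempts int, base time.Duration, name string, args ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		if out, e := exec.Command(name, args...).CombinedOutput(); e == nil {
			return nil
		} else {
			err = fmt.Errorf("%v: %s", e, out)
		}
		// Double the delay each round and add jitter, roughly matching the
		// sub-second-to-seconds sequence the log records.
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := applyWithRetry(5, 300*time.Millisecond,
		"kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	if err != nil {
		fmt.Println("giving up:", err)
	}
}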
	I1216 04:29:42.818814  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:42.990327  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:43.052462  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.052555  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.052597  475694 retry.go:31] will retry after 320.089446ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.146742  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.207665  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.212209  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.212243  475694 retry.go:31] will retry after 291.464307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.318395  475694 type.go:168] "Request Body" body=""
	I1216 04:29:43.318472  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:43.318814  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:43.373308  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:43.435189  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.435254  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.435280  475694 retry.go:31] will retry after 781.758867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.504448  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.571334  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.571371  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.571390  475694 retry.go:31] will retry after 332.937553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.818906  475694 type.go:168] "Request Body" body=""
	I1216 04:29:43.818991  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:43.819297  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:43.904706  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.962384  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.966307  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.966396  475694 retry.go:31] will retry after 1.136896719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.217759  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:44.279618  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:44.283381  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.283415  475694 retry.go:31] will retry after 1.1051557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.318552  475694 type.go:168] "Request Body" body=""
	I1216 04:29:44.318673  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:44.319015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:44.818498  475694 type.go:168] "Request Body" body=""
	I1216 04:29:44.818571  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:44.818910  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:44.818988  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
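	node_ready.go keeps issuing the GET /api/v1/nodes/functional-763073 requests seen above roughly every 500ms, treating connection refused as retryable, until the node reports a Ready condition or the 6m budget runs out. A sketch of that wait with client-go (assumes the k8s.io/client-go dependency; the kubeconfig path and node name are from this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// retrying through transient errors while the apiserver restarts.
func waitNodeReady(kubeconfig, node string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} // on error (e.g. connection refused) just poll again
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q never became Ready", node)
}

func main() {
	err := waitNodeReady("/home/jenkins/minikube-integration/22158-438353/kubeconfig",
		"functional-763073", 6*time.Minute)
	fmt.Println(err)
}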
	I1216 04:29:45.103534  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:45.194787  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:45.195010  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.195099  475694 retry.go:31] will retry after 1.211699823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.319146  475694 type.go:168] "Request Body" body=""
	I1216 04:29:45.319235  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:45.319562  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:45.388763  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:45.456804  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:45.456849  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.456877  475694 retry.go:31] will retry after 720.865488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.819295  475694 type.go:168] "Request Body" body=""
	I1216 04:29:45.819381  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:45.819670  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:46.178239  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:46.241684  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:46.241730  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.241750  475694 retry.go:31] will retry after 2.398929444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.318930  475694 type.go:168] "Request Body" body=""
	I1216 04:29:46.319008  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:46.319303  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:46.407630  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:46.476894  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:46.476941  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.476959  475694 retry.go:31] will retry after 1.300502308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.818702  475694 type.go:168] "Request Body" body=""
	I1216 04:29:46.818786  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:46.819124  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:46.819187  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:47.318514  475694 type.go:168] "Request Body" body=""
	I1216 04:29:47.318594  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:47.318866  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:47.778651  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:47.819040  475694 type.go:168] "Request Body" body=""
	I1216 04:29:47.819112  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:47.819424  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:47.836852  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:47.840282  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:47.840312  475694 retry.go:31] will retry after 3.994114703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.318482  475694 type.go:168] "Request Body" body=""
	I1216 04:29:48.318555  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:48.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:48.641498  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:48.705855  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:48.705903  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.705923  475694 retry.go:31] will retry after 1.757515206s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.819100  475694 type.go:168] "Request Body" body=""
	I1216 04:29:48.819185  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:48.819457  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:48.819514  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:49.319285  475694 type.go:168] "Request Body" body=""
	I1216 04:29:49.319362  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:49.319697  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:49.819385  475694 type.go:168] "Request Body" body=""
	I1216 04:29:49.819456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:49.819795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:50.318415  475694 type.go:168] "Request Body" body=""
	I1216 04:29:50.318509  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:50.318828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:50.464331  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:50.523255  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:50.523310  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:50.523330  475694 retry.go:31] will retry after 5.029530817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:50.818441  475694 type.go:168] "Request Body" body=""
	I1216 04:29:50.818532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:50.818884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:51.318457  475694 type.go:168] "Request Body" body=""
	I1216 04:29:51.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:51.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:51.318895  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:51.819013  475694 type.go:168] "Request Body" body=""
	I1216 04:29:51.819120  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:51.819434  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:51.834846  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:51.906733  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:51.906789  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:51.906807  475694 retry.go:31] will retry after 4.132534587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:52.319380  475694 type.go:168] "Request Body" body=""
	I1216 04:29:52.319456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:52.319782  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:52.818402  475694 type.go:168] "Request Body" body=""
	I1216 04:29:52.818481  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:52.818820  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:53.318399  475694 type.go:168] "Request Body" body=""
	I1216 04:29:53.318484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:53.318781  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:53.818364  475694 type.go:168] "Request Body" body=""
	I1216 04:29:53.818436  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:53.818718  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:53.818768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:54.318470  475694 type.go:168] "Request Body" body=""
	I1216 04:29:54.318553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:54.318855  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:54.818416  475694 type.go:168] "Request Body" body=""
	I1216 04:29:54.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:54.818791  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:55.318474  475694 type.go:168] "Request Body" body=""
	I1216 04:29:55.318563  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:55.318906  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
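Every request in this trace advertises Accept: application/vnd.kubernetes.protobuf,application/json, meaning the client prefers protobuf wire encoding and falls back to JSON. In client-go that negotiation is configured on rest.Config; a small sketch is below, where the helper name protobufConfig is an assumption of mine, while the content types and kubeconfig path come from the log.

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// protobufConfig returns a rest.Config that sends the same Accept header
// seen in the log: protobuf preferred, JSON as fallback.
func protobufConfig(kubeconfig string) (*rest.Config, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	return cfg, nil
}

func main() {
	cfg, err := protobufConfig("/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("accept:", cfg.AcceptContentTypes)
}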
	I1216 04:29:55.553265  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:55.626702  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:55.630832  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:55.630867  475694 retry.go:31] will retry after 7.132223529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:55.819263  475694 type.go:168] "Request Body" body=""
	I1216 04:29:55.819349  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:55.819703  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:55.819756  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:56.040181  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:56.104678  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:56.104716  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:56.104735  475694 retry.go:31] will retry after 8.857583825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:56.319036  475694 type.go:168] "Request Body" body=""
	I1216 04:29:56.319119  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:56.319453  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:56.819390  475694 type.go:168] "Request Body" body=""
	I1216 04:29:56.819466  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:56.819757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:57.319383  475694 type.go:168] "Request Body" body=""
	I1216 04:29:57.319466  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:57.319823  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:57.818398  475694 type.go:168] "Request Body" body=""
	I1216 04:29:57.818473  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:57.818722  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:58.319396  475694 type.go:168] "Request Body" body=""
	I1216 04:29:58.319513  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:58.319927  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:58.319980  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:58.818648  475694 type.go:168] "Request Body" body=""
	I1216 04:29:58.818727  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:58.819015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:59.318403  475694 type.go:168] "Request Body" body=""
	I1216 04:29:59.318501  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:59.318763  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:59.818481  475694 type.go:168] "Request Body" body=""
	I1216 04:29:59.818568  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:59.818883  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:00.318660  475694 type.go:168] "Request Body" body=""
	I1216 04:30:00.318742  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:00.319069  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:00.818779  475694 type.go:168] "Request Body" body=""
	I1216 04:30:00.818900  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:00.819255  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:00.819314  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:01.318812  475694 type.go:168] "Request Body" body=""
	I1216 04:30:01.318904  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:01.319269  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:01.818988  475694 type.go:168] "Request Body" body=""
	I1216 04:30:01.819066  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:01.819335  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:02.319195  475694 type.go:168] "Request Body" body=""
	I1216 04:30:02.319286  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:02.319671  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:02.763349  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:02.818891  475694 type.go:168] "Request Body" body=""
	I1216 04:30:02.818969  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:02.819274  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:02.830785  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:02.830835  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:02.830855  475694 retry.go:31] will retry after 11.115111011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
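The validation error keeps repeating because kubectl apply, with validation enabled, first downloads the OpenAPI schema from the apiserver; with nothing listening on port 8441 even that schema fetch fails, which is why the message suggests --validate=false. One way to avoid burning retries against a dead endpoint is to probe the apiserver's /readyz endpoint (a standard kube-apiserver health path) before applying; a hedged sketch follows, where apiserverUp and the 2-second timeout are illustrative assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverUp does a cheap HTTPS probe of the apiserver before any apply is
// attempted. InsecureSkipVerify is tolerable here only because we test
// reachability, not identity; a real caller should use the cluster CA.
func apiserverUp(baseURL string) bool {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(baseURL + "/readyz")
	if err != nil {
		return false // e.g. "connect: connection refused" while the apiserver restarts
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// Endpoint taken from the log above.
	fmt.Println(apiserverUp("https://192.168.49.2:8441"))
}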
	I1216 04:30:03.318424  475694 type.go:168] "Request Body" body=""
	I1216 04:30:03.318492  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:03.318754  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:03.318795  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:03.818481  475694 type.go:168] "Request Body" body=""
	I1216 04:30:03.818567  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:03.818887  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:04.318356  475694 type.go:168] "Request Body" body=""
	I1216 04:30:04.318440  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:04.318791  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:04.819354  475694 type.go:168] "Request Body" body=""
	I1216 04:30:04.819425  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:04.819745  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:04.963132  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:05.030528  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:05.030573  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:05.030594  475694 retry.go:31] will retry after 13.807129774s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:05.319025  475694 type.go:168] "Request Body" body=""
	I1216 04:30:05.319109  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:05.319430  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:05.319487  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:05.819077  475694 type.go:168] "Request Body" body=""
	I1216 04:30:05.819160  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:05.819454  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:06.319216  475694 type.go:168] "Request Body" body=""
	I1216 04:30:06.319298  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:06.319561  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:06.818566  475694 type.go:168] "Request Body" body=""
	I1216 04:30:06.818640  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:06.818960  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:07.319006  475694 type.go:168] "Request Body" body=""
	I1216 04:30:07.319080  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:07.319410  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:07.819153  475694 type.go:168] "Request Body" body=""
	I1216 04:30:07.819235  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:07.819526  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:07.819580  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:08.319363  475694 type.go:168] "Request Body" body=""
	I1216 04:30:08.319439  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:08.319857  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:08.818460  475694 type.go:168] "Request Body" body=""
	I1216 04:30:08.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:08.818880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:09.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:30:09.318512  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:09.318769  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:09.818489  475694 type.go:168] "Request Body" body=""
	I1216 04:30:09.818572  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:09.818873  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:10.318546  475694 type.go:168] "Request Body" body=""
	I1216 04:30:10.318636  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:10.319011  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:10.319072  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:10.818626  475694 type.go:168] "Request Body" body=""
	I1216 04:30:10.818702  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:10.819016  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:11.318440  475694 type.go:168] "Request Body" body=""
	I1216 04:30:11.318518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:11.318808  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:11.818916  475694 type.go:168] "Request Body" body=""
	I1216 04:30:11.818993  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:11.819322  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:12.319122  475694 type.go:168] "Request Body" body=""
	I1216 04:30:12.319197  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:12.319465  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:12.319515  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:12.819218  475694 type.go:168] "Request Body" body=""
	I1216 04:30:12.819289  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:12.819619  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:13.318346  475694 type.go:168] "Request Body" body=""
	I1216 04:30:13.318424  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:13.318745  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:13.818446  475694 type.go:168] "Request Body" body=""
	I1216 04:30:13.818521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:13.818889  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:13.946231  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:14.010550  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:14.014827  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:14.014869  475694 retry.go:31] will retry after 8.112010712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:14.319336  475694 type.go:168] "Request Body" body=""
	I1216 04:30:14.319410  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:14.319731  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:14.319784  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:14.818352  475694 type.go:168] "Request Body" body=""
	I1216 04:30:14.818426  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:14.818781  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:15.319376  475694 type.go:168] "Request Body" body=""
	I1216 04:30:15.319444  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:15.319700  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:15.818487  475694 type.go:168] "Request Body" body=""
	I1216 04:30:15.818563  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:15.818924  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:16.319359  475694 type.go:168] "Request Body" body=""
	I1216 04:30:16.319430  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:16.319765  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:16.319823  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:16.818746  475694 type.go:168] "Request Body" body=""
	I1216 04:30:16.818828  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:16.819089  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:17.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:30:17.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:17.318878  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:17.818576  475694 type.go:168] "Request Body" body=""
	I1216 04:30:17.818652  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:17.818985  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:18.318670  475694 type.go:168] "Request Body" body=""
	I1216 04:30:18.318748  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:18.319008  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:18.818464  475694 type.go:168] "Request Body" body=""
	I1216 04:30:18.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:18.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:18.818893  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:18.838055  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:18.893739  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:18.897596  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:18.897631  475694 retry.go:31] will retry after 11.366080685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:19.319301  475694 type.go:168] "Request Body" body=""
	I1216 04:30:19.319380  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:19.319681  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:19.819376  475694 type.go:168] "Request Body" body=""
	I1216 04:30:19.819458  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:19.819724  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:20.318407  475694 type.go:168] "Request Body" body=""
	I1216 04:30:20.318501  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:20.318840  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:20.818403  475694 type.go:168] "Request Body" body=""
	I1216 04:30:20.818484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:20.818835  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:21.318401  475694 type.go:168] "Request Body" body=""
	I1216 04:30:21.318469  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:21.318728  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:21.318768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:21.818866  475694 type.go:168] "Request Body" body=""
	I1216 04:30:21.818958  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:21.819324  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:22.127748  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:22.189082  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:22.189129  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:22.189148  475694 retry.go:31] will retry after 27.844564007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:22.319363  475694 type.go:168] "Request Body" body=""
	I1216 04:30:22.319433  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:22.319757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:22.818358  475694 type.go:168] "Request Body" body=""
	I1216 04:30:22.818435  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:22.818698  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:23.319415  475694 type.go:168] "Request Body" body=""
	I1216 04:30:23.319492  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:23.319809  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:23.319865  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:23.818531  475694 type.go:168] "Request Body" body=""
	I1216 04:30:23.818610  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:23.818962  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:24.318495  475694 type.go:168] "Request Body" body=""
	I1216 04:30:24.318564  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:24.318816  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:24.818435  475694 type.go:168] "Request Body" body=""
	I1216 04:30:24.818517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:24.818856  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:25.318545  475694 type.go:168] "Request Body" body=""
	I1216 04:30:25.318628  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:25.318920  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:25.818420  475694 type.go:168] "Request Body" body=""
	I1216 04:30:25.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:25.818846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:25.818900  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:26.318452  475694 type.go:168] "Request Body" body=""
	I1216 04:30:26.318530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:26.318905  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:26.818764  475694 type.go:168] "Request Body" body=""
	I1216 04:30:26.818839  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:26.819183  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:27.318950  475694 type.go:168] "Request Body" body=""
	I1216 04:30:27.319026  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:27.319288  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:27.819187  475694 type.go:168] "Request Body" body=""
	I1216 04:30:27.819262  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:27.819610  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:27.819670  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:28.319414  475694 type.go:168] "Request Body" body=""
	I1216 04:30:28.319507  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:28.319802  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:28.818429  475694 type.go:168] "Request Body" body=""
	I1216 04:30:28.818505  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:28.818767  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:29.318476  475694 type.go:168] "Request Body" body=""
	I1216 04:30:29.318551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:29.318919  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:29.818620  475694 type.go:168] "Request Body" body=""
	I1216 04:30:29.818707  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:29.819030  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:30.264789  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:30.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:30:30.318482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:30.318747  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:30.318791  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:30.329449  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:30.329484  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:30.329503  475694 retry.go:31] will retry after 18.349811318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:30.819293  475694 type.go:168] "Request Body" body=""
	I1216 04:30:30.819380  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:30.819741  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:31.318473  475694 type.go:168] "Request Body" body=""
	I1216 04:30:31.318550  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:31.318884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:31.818872  475694 type.go:168] "Request Body" body=""
	I1216 04:30:31.818940  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:31.819221  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:32.319072  475694 type.go:168] "Request Body" body=""
	I1216 04:30:32.319152  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:32.319497  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:32.319550  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:32.819264  475694 type.go:168] "Request Body" body=""
	I1216 04:30:32.819341  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:32.819678  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:33.319325  475694 type.go:168] "Request Body" body=""
	I1216 04:30:33.319391  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:33.319698  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:33.818422  475694 type.go:168] "Request Body" body=""
	I1216 04:30:33.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:33.818854  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:34.318569  475694 type.go:168] "Request Body" body=""
	I1216 04:30:34.318644  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:34.318965  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:34.818658  475694 type.go:168] "Request Body" body=""
	I1216 04:30:34.818733  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:34.819000  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:34.819051  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:35.318384  475694 type.go:168] "Request Body" body=""
	I1216 04:30:35.318462  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:35.318839  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:35.818450  475694 type.go:168] "Request Body" body=""
	I1216 04:30:35.818528  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:35.818876  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:36.318610  475694 type.go:168] "Request Body" body=""
	I1216 04:30:36.318679  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:36.318948  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:36.818786  475694 type.go:168] "Request Body" body=""
	I1216 04:30:36.818871  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:36.819206  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:36.819259  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:37.318997  475694 type.go:168] "Request Body" body=""
	I1216 04:30:37.319078  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:37.319374  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:37.819133  475694 type.go:168] "Request Body" body=""
	I1216 04:30:37.819207  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:37.819482  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:38.319323  475694 type.go:168] "Request Body" body=""
	I1216 04:30:38.319397  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:38.319736  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:38.818432  475694 type.go:168] "Request Body" body=""
	I1216 04:30:38.818517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:38.818843  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:39.318407  475694 type.go:168] "Request Body" body=""
	I1216 04:30:39.318474  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:39.318729  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:39.318768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:39.818457  475694 type.go:168] "Request Body" body=""
	I1216 04:30:39.818539  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:39.818884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:40.318619  475694 type.go:168] "Request Body" body=""
	I1216 04:30:40.318693  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:40.319014  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:40.818414  475694 type.go:168] "Request Body" body=""
	I1216 04:30:40.818482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:40.818755  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:41.318469  475694 type.go:168] "Request Body" body=""
	I1216 04:30:41.318542  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:41.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:41.318917  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:41.819023  475694 type.go:168] "Request Body" body=""
	I1216 04:30:41.819096  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:41.819434  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:42.319088  475694 type.go:168] "Request Body" body=""
	I1216 04:30:42.319177  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:42.319455  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:42.819310  475694 type.go:168] "Request Body" body=""
	I1216 04:30:42.819387  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:42.819732  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:43.318452  475694 type.go:168] "Request Body" body=""
	I1216 04:30:43.318526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:43.318861  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:43.818401  475694 type.go:168] "Request Body" body=""
	I1216 04:30:43.818480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:43.818796  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:43.818851  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:44.318448  475694 type.go:168] "Request Body" body=""
	I1216 04:30:44.318527  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:44.318869  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:44.818424  475694 type.go:168] "Request Body" body=""
	I1216 04:30:44.818501  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:44.818836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:45.318828  475694 type.go:168] "Request Body" body=""
	I1216 04:30:45.318911  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:45.319336  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:45.819228  475694 type.go:168] "Request Body" body=""
	I1216 04:30:45.819306  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:45.819658  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:45.819718  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:46.318375  475694 type.go:168] "Request Body" body=""
	I1216 04:30:46.318460  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:46.318811  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:46.818660  475694 type.go:168] "Request Body" body=""
	I1216 04:30:46.818733  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:46.819015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:47.318699  475694 type.go:168] "Request Body" body=""
	I1216 04:30:47.318774  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:47.319086  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:47.818455  475694 type.go:168] "Request Body" body=""
	I1216 04:30:47.818531  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:47.818830  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:48.318401  475694 type.go:168] "Request Body" body=""
	I1216 04:30:48.318484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:48.318806  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:48.318869  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:48.679520  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:48.741510  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:48.741587  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:48.741616  475694 retry.go:31] will retry after 29.090794722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:48.818706  475694 type.go:168] "Request Body" body=""
	I1216 04:30:48.818780  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:48.819102  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:49.318396  475694 type.go:168] "Request Body" body=""
	I1216 04:30:49.318469  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:49.318810  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:49.818427  475694 type.go:168] "Request Body" body=""
	I1216 04:30:49.818521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:49.818809  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:50.034416  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:50.096674  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:50.100468  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:50.100502  475694 retry.go:31] will retry after 39.426681546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
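
	The repeated GET https://192.168.49.2:8441/api/v1/nodes/functional-763073 requests throughout this log are a ~0.5s node-readiness poll that keeps failing with connection refused. A self-contained client-go sketch of such a poll, reusing the node name and kubeconfig path from the log; this is a sketch of the pattern, not node_ready.go itself:

	    // Polls the node's Ready condition until it is True, logging failures
	    // (the "connection refused" warnings above) and retrying every 0.5s.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	client, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	for {
	    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-763073", metav1.GetOptions{})
	    		if err != nil {
	    			// While the apiserver is down, every iteration lands here.
	    			fmt.Println("will retry:", err)
	    		} else {
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	    					fmt.Println("node is Ready")
	    					return
	    				}
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s interval above
	    	}
	    }
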
	I1216 04:30:50.318852  475694 type.go:168] "Request Body" body=""
	I1216 04:30:50.318933  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:50.319214  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:50.319264  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:50.819068  475694 type.go:168] "Request Body" body=""
	I1216 04:30:50.819159  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:50.819546  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:51.319318  475694 type.go:168] "Request Body" body=""
	I1216 04:30:51.319385  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:51.319643  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:51.818732  475694 type.go:168] "Request Body" body=""
	I1216 04:30:51.818806  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:51.819127  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:52.318819  475694 type.go:168] "Request Body" body=""
	I1216 04:30:52.318894  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:52.319218  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:52.818982  475694 type.go:168] "Request Body" body=""
	I1216 04:30:52.819057  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:52.819321  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:52.819370  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:53.319110  475694 type.go:168] "Request Body" body=""
	I1216 04:30:53.319188  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:53.319511  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:53.819108  475694 type.go:168] "Request Body" body=""
	I1216 04:30:53.819188  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:53.819533  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:54.319331  475694 type.go:168] "Request Body" body=""
	I1216 04:30:54.319403  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:54.319714  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:54.818392  475694 type.go:168] "Request Body" body=""
	I1216 04:30:54.818470  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:54.818795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:55.318429  475694 type.go:168] "Request Body" body=""
	I1216 04:30:55.318526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:55.318820  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:55.318874  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:55.818422  475694 type.go:168] "Request Body" body=""
	I1216 04:30:55.818499  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:55.818755  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:56.318440  475694 type.go:168] "Request Body" body=""
	I1216 04:30:56.318511  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:56.318840  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:56.818691  475694 type.go:168] "Request Body" body=""
	I1216 04:30:56.818767  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:56.819103  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:57.318395  475694 type.go:168] "Request Body" body=""
	I1216 04:30:57.318465  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:57.318757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:57.819403  475694 type.go:168] "Request Body" body=""
	I1216 04:30:57.819476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:57.819813  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:57.819868  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:58.318364  475694 type.go:168] "Request Body" body=""
	I1216 04:30:58.318440  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:58.318768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:58.819413  475694 type.go:168] "Request Body" body=""
	I1216 04:30:58.819488  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:58.819761  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:59.318433  475694 type.go:168] "Request Body" body=""
	I1216 04:30:59.318514  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:59.318806  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:59.818497  475694 type.go:168] "Request Body" body=""
	I1216 04:30:59.818583  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:59.818942  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:00.327892  475694 type.go:168] "Request Body" body=""
	I1216 04:31:00.327986  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:00.328316  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:00.328364  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:00.819096  475694 type.go:168] "Request Body" body=""
	I1216 04:31:00.819170  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:00.819499  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:01.319360  475694 type.go:168] "Request Body" body=""
	I1216 04:31:01.319437  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:01.319773  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:01.818911  475694 type.go:168] "Request Body" body=""
	I1216 04:31:01.818985  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:01.819294  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:02.319036  475694 type.go:168] "Request Body" body=""
	I1216 04:31:02.319118  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:02.319418  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:02.819093  475694 type.go:168] "Request Body" body=""
	I1216 04:31:02.819166  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:02.819505  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:02.819563  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:03.319107  475694 type.go:168] "Request Body" body=""
	I1216 04:31:03.319185  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:03.319442  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:03.819184  475694 type.go:168] "Request Body" body=""
	I1216 04:31:03.819264  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:03.819590  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:04.319286  475694 type.go:168] "Request Body" body=""
	I1216 04:31:04.319362  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:04.319688  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:04.818381  475694 type.go:168] "Request Body" body=""
	I1216 04:31:04.818461  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:04.818746  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:05.318450  475694 type.go:168] "Request Body" body=""
	I1216 04:31:05.318528  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:05.318837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:05.318887  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:05.818417  475694 type.go:168] "Request Body" body=""
	I1216 04:31:05.818534  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:05.818876  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:06.318435  475694 type.go:168] "Request Body" body=""
	I1216 04:31:06.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:06.318784  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:06.818697  475694 type.go:168] "Request Body" body=""
	I1216 04:31:06.818768  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:06.819055  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:07.319228  475694 type.go:168] "Request Body" body=""
	I1216 04:31:07.319300  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:07.319611  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:07.319663  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:07.819403  475694 type.go:168] "Request Body" body=""
	I1216 04:31:07.819489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:07.819795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:08.318439  475694 type.go:168] "Request Body" body=""
	I1216 04:31:08.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:08.318858  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:08.818436  475694 type.go:168] "Request Body" body=""
	I1216 04:31:08.818509  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:08.818841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:09.318534  475694 type.go:168] "Request Body" body=""
	I1216 04:31:09.318615  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:09.318866  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:09.818449  475694 type.go:168] "Request Body" body=""
	I1216 04:31:09.818526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:09.818883  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:09.818943  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:10.318439  475694 type.go:168] "Request Body" body=""
	I1216 04:31:10.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:10.318863  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:10.818564  475694 type.go:168] "Request Body" body=""
	I1216 04:31:10.818634  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:10.818898  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:11.318453  475694 type.go:168] "Request Body" body=""
	I1216 04:31:11.318525  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:11.318880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:11.818867  475694 type.go:168] "Request Body" body=""
	I1216 04:31:11.818943  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:11.819292  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:11.819345  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:12.319080  475694 type.go:168] "Request Body" body=""
	I1216 04:31:12.319153  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:12.319411  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:12.819163  475694 type.go:168] "Request Body" body=""
	I1216 04:31:12.819236  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:12.819597  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:13.319410  475694 type.go:168] "Request Body" body=""
	I1216 04:31:13.319484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:13.319823  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:13.818534  475694 type.go:168] "Request Body" body=""
	I1216 04:31:13.818607  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:13.818872  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:14.318447  475694 type.go:168] "Request Body" body=""
	I1216 04:31:14.318531  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:14.318819  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:14.318867  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:14.818523  475694 type.go:168] "Request Body" body=""
	I1216 04:31:14.818598  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:14.818932  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:15.318406  475694 type.go:168] "Request Body" body=""
	I1216 04:31:15.318504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:15.318824  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:15.818441  475694 type.go:168] "Request Body" body=""
	I1216 04:31:15.818515  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:15.818863  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:16.318447  475694 type.go:168] "Request Body" body=""
	I1216 04:31:16.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:16.318822  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:16.818649  475694 type.go:168] "Request Body" body=""
	I1216 04:31:16.818718  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:16.818992  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:16.819042  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:17.318375  475694 type.go:168] "Request Body" body=""
	I1216 04:31:17.318460  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:17.318807  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:17.818446  475694 type.go:168] "Request Body" body=""
	I1216 04:31:17.818522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:17.818831  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:17.833208  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:31:17.902395  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:17.906323  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:17.906439  475694 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
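
	Every apply attempt in this log fails the same way: kubectl cannot download /openapi/v2 for client-side validation because nothing is listening on port 8441. Rather than disabling validation with --validate=false, a caller could probe the apiserver port before shelling out to kubectl; a minimal sketch, assuming the address from the log:

	    // Pre-checks TCP reachability of the apiserver so a failed dial is
	    // reported directly instead of surfacing as a validation error.
	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func apiserverReachable(addr string, timeout time.Duration) bool {
	    	conn, err := net.DialTimeout("tcp", addr, timeout)
	    	if err != nil {
	    		return false
	    	}
	    	conn.Close()
	    	return true
	    }

	    func main() {
	    	if !apiserverReachable("192.168.49.2:8441", 2*time.Second) {
	    		fmt.Println("apiserver not reachable; deferring addon apply")
	    		return
	    	}
	    	fmt.Println("apiserver reachable; safe to apply addon manifests")
	    }
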
	I1216 04:31:18.318429  475694 type.go:168] "Request Body" body=""
	I1216 04:31:18.318503  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:18.318777  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET was retried every ~500ms through 04:31:29.318 with identical empty responses; node_ready.go:55 logged 'error getting node "functional-763073" condition "Ready" status (will retry): ... dial tcp 192.168.49.2:8441: connect: connection refused' at 04:31:19, 04:31:21, 04:31:24, 04:31:26 and 04:31:28 ...]
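	The run of requests condensed above is minikube's node-readiness poll: the same node GET every ~500ms, retried for as long as the TCP connect is refused. A rough stand-in for that loop, as a sketch only (this is not minikube's node_ready.go; the curl flags and the 0.5s interval are assumptions matched to the observed cadence):

	# retry the node GET every 0.5s until something accepts connections on 8441;
	# -k skips cert verification and --max-time bounds each attempt, so the loop
	# exits as soon as the apiserver answers at the TCP/TLS layer (even with 401)
	until curl -sk --max-time 2 \
	    "https://192.168.49.2:8441/api/v1/nodes/functional-763073" >/dev/null; do
	  echo "apiserver not reachable yet, will retry"
	  sleep 0.5
	done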
	I1216 04:31:29.528240  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:31:29.598877  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:29.598918  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:29.598995  475694 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:31:29.602136  475694 out.go:179] * Enabled addons: 
	I1216 04:31:29.604114  475694 addons.go:530] duration metric: took 1m47.775177414s for enable addons: enabled=[]
	I1216 04:31:29.818770  475694 type.go:168] "Request Body" body=""
	I1216 04:31:29.818886  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:29.819272  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling continued unchanged every ~500ms from 04:31:29.818 through 04:32:16.818, every request answered with connection refused; node_ready.go:55 repeated the '(will retry)' warning roughly every 2–2.5s, last at 04:32:16.318 ...]
	I1216 04:32:17.318355  475694 type.go:168] "Request Body" body=""
	I1216 04:32:17.318433  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:17.318766  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:17.818482  475694 type.go:168] "Request Body" body=""
	I1216 04:32:17.818562  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:17.818895  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:18.318566  475694 type.go:168] "Request Body" body=""
	I1216 04:32:18.318640  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:18.318945  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:18.319006  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:18.818434  475694 type.go:168] "Request Body" body=""
	I1216 04:32:18.818516  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:18.818842  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:19.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:32:19.318516  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:19.318846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:19.819341  475694 type.go:168] "Request Body" body=""
	I1216 04:32:19.819415  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:19.819722  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:20.318384  475694 type.go:168] "Request Body" body=""
	I1216 04:32:20.318467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:20.318801  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:20.818415  475694 type.go:168] "Request Body" body=""
	I1216 04:32:20.818494  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:20.818869  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:20.818924  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:21.318563  475694 type.go:168] "Request Body" body=""
	I1216 04:32:21.318632  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:21.318896  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:21.818868  475694 type.go:168] "Request Body" body=""
	I1216 04:32:21.818945  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:21.819262  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:22.318832  475694 type.go:168] "Request Body" body=""
	I1216 04:32:22.318939  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:22.319249  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:22.818805  475694 type.go:168] "Request Body" body=""
	I1216 04:32:22.818880  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:22.819174  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:22.819224  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:23.318762  475694 type.go:168] "Request Body" body=""
	I1216 04:32:23.318839  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:23.319185  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:23.818996  475694 type.go:168] "Request Body" body=""
	I1216 04:32:23.819074  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:23.819390  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:24.319143  475694 type.go:168] "Request Body" body=""
	I1216 04:32:24.319208  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:24.319468  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:24.819344  475694 type.go:168] "Request Body" body=""
	I1216 04:32:24.819421  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:24.819753  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:24.819813  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:25.318436  475694 type.go:168] "Request Body" body=""
	I1216 04:32:25.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:25.318844  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:25.818411  475694 type.go:168] "Request Body" body=""
	I1216 04:32:25.818489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:25.818804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:26.318437  475694 type.go:168] "Request Body" body=""
	I1216 04:32:26.318513  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:26.318806  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:26.818705  475694 type.go:168] "Request Body" body=""
	I1216 04:32:26.818789  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:26.819111  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:27.318783  475694 type.go:168] "Request Body" body=""
	I1216 04:32:27.318852  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:27.319112  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:27.319155  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:27.818441  475694 type.go:168] "Request Body" body=""
	I1216 04:32:27.818517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:27.818848  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:28.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:32:28.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:28.318875  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:28.818407  475694 type.go:168] "Request Body" body=""
	I1216 04:32:28.818477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:28.818822  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:29.318518  475694 type.go:168] "Request Body" body=""
	I1216 04:32:29.318617  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:29.318953  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:29.818649  475694 type.go:168] "Request Body" body=""
	I1216 04:32:29.818733  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:29.819084  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:29.819143  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:30.318804  475694 type.go:168] "Request Body" body=""
	I1216 04:32:30.318881  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:30.319182  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:30.818908  475694 type.go:168] "Request Body" body=""
	I1216 04:32:30.818985  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:30.819365  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:31.319128  475694 type.go:168] "Request Body" body=""
	I1216 04:32:31.319211  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:31.319551  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:31.818625  475694 type.go:168] "Request Body" body=""
	I1216 04:32:31.818715  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:31.819005  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:32.318377  475694 type.go:168] "Request Body" body=""
	I1216 04:32:32.318452  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:32.318779  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:32.318830  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:32.818478  475694 type.go:168] "Request Body" body=""
	I1216 04:32:32.818558  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:32.818890  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:33.318419  475694 type.go:168] "Request Body" body=""
	I1216 04:32:33.318491  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:33.318763  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:33.818404  475694 type.go:168] "Request Body" body=""
	I1216 04:32:33.818487  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:33.818835  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:34.318540  475694 type.go:168] "Request Body" body=""
	I1216 04:32:34.318621  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:34.318936  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:34.318997  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:34.818434  475694 type.go:168] "Request Body" body=""
	I1216 04:32:34.818510  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:34.818779  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:35.318447  475694 type.go:168] "Request Body" body=""
	I1216 04:32:35.318531  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:35.318863  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:35.818451  475694 type.go:168] "Request Body" body=""
	I1216 04:32:35.818530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:35.818878  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:36.318556  475694 type.go:168] "Request Body" body=""
	I1216 04:32:36.318624  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:36.318986  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:36.319033  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:36.818822  475694 type.go:168] "Request Body" body=""
	I1216 04:32:36.818905  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:36.819233  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:37.319068  475694 type.go:168] "Request Body" body=""
	I1216 04:32:37.319154  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:37.319493  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:37.819197  475694 type.go:168] "Request Body" body=""
	I1216 04:32:37.819270  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:37.819602  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:38.319373  475694 type.go:168] "Request Body" body=""
	I1216 04:32:38.319452  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:38.319769  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:38.319827  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:38.818447  475694 type.go:168] "Request Body" body=""
	I1216 04:32:38.818527  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:38.818861  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:39.318461  475694 type.go:168] "Request Body" body=""
	I1216 04:32:39.318551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:39.318937  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:39.818658  475694 type.go:168] "Request Body" body=""
	I1216 04:32:39.818731  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:39.819050  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:40.318767  475694 type.go:168] "Request Body" body=""
	I1216 04:32:40.318846  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:40.319183  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:40.818948  475694 type.go:168] "Request Body" body=""
	I1216 04:32:40.819022  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:40.819278  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:40.819323  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:41.319042  475694 type.go:168] "Request Body" body=""
	I1216 04:32:41.319117  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:41.319435  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:41.818623  475694 type.go:168] "Request Body" body=""
	I1216 04:32:41.818705  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:41.819037  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:42.318429  475694 type.go:168] "Request Body" body=""
	I1216 04:32:42.318502  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:42.318792  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:42.818437  475694 type.go:168] "Request Body" body=""
	I1216 04:32:42.818515  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:42.818838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:43.318459  475694 type.go:168] "Request Body" body=""
	I1216 04:32:43.318541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:43.318887  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:43.318945  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:43.819357  475694 type.go:168] "Request Body" body=""
	I1216 04:32:43.819431  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:43.819742  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:44.318455  475694 type.go:168] "Request Body" body=""
	I1216 04:32:44.318551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:44.318871  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:44.818581  475694 type.go:168] "Request Body" body=""
	I1216 04:32:44.818656  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:44.818990  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:45.318689  475694 type.go:168] "Request Body" body=""
	I1216 04:32:45.318765  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:45.319069  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:45.319110  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:45.818468  475694 type.go:168] "Request Body" body=""
	I1216 04:32:45.818541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:45.818854  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:46.318349  475694 type.go:168] "Request Body" body=""
	I1216 04:32:46.318433  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:46.318756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:46.818690  475694 type.go:168] "Request Body" body=""
	I1216 04:32:46.818773  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:46.819032  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:47.318444  475694 type.go:168] "Request Body" body=""
	I1216 04:32:47.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:47.318860  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:47.818472  475694 type.go:168] "Request Body" body=""
	I1216 04:32:47.818551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:47.818924  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:47.818986  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:48.319386  475694 type.go:168] "Request Body" body=""
	I1216 04:32:48.319456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:48.319715  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:48.818461  475694 type.go:168] "Request Body" body=""
	I1216 04:32:48.818557  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:48.818880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:49.319359  475694 type.go:168] "Request Body" body=""
	I1216 04:32:49.319434  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:49.319757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:49.819351  475694 type.go:168] "Request Body" body=""
	I1216 04:32:49.819434  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:49.819700  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:49.819743  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:50.318399  475694 type.go:168] "Request Body" body=""
	I1216 04:32:50.318483  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:50.318800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:50.818463  475694 type.go:168] "Request Body" body=""
	I1216 04:32:50.818546  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:50.818880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:51.318426  475694 type.go:168] "Request Body" body=""
	I1216 04:32:51.318508  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:51.318785  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:51.818955  475694 type.go:168] "Request Body" body=""
	I1216 04:32:51.819039  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:51.819431  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:52.319209  475694 type.go:168] "Request Body" body=""
	I1216 04:32:52.319287  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:52.319637  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:52.319692  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:52.818373  475694 type.go:168] "Request Body" body=""
	I1216 04:32:52.818449  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:52.818711  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:53.318405  475694 type.go:168] "Request Body" body=""
	I1216 04:32:53.318481  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:53.318829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:53.818362  475694 type.go:168] "Request Body" body=""
	I1216 04:32:53.818453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:53.818780  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:54.319380  475694 type.go:168] "Request Body" body=""
	I1216 04:32:54.319453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:54.319718  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:54.319768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:54.818452  475694 type.go:168] "Request Body" body=""
	I1216 04:32:54.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:54.818896  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:55.318601  475694 type.go:168] "Request Body" body=""
	I1216 04:32:55.318680  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:55.319023  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:55.818723  475694 type.go:168] "Request Body" body=""
	I1216 04:32:55.818804  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:55.819074  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:56.318355  475694 type.go:168] "Request Body" body=""
	I1216 04:32:56.318436  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:56.318777  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:56.818730  475694 type.go:168] "Request Body" body=""
	I1216 04:32:56.818807  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:56.819167  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:56.819227  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:57.318894  475694 type.go:168] "Request Body" body=""
	I1216 04:32:57.318969  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:57.319232  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:57.818968  475694 type.go:168] "Request Body" body=""
	I1216 04:32:57.819042  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:57.819399  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:58.319214  475694 type.go:168] "Request Body" body=""
	I1216 04:32:58.319287  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:58.319634  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:58.819335  475694 type.go:168] "Request Body" body=""
	I1216 04:32:58.819403  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:58.819672  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:58.819714  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:59.318342  475694 type.go:168] "Request Body" body=""
	I1216 04:32:59.318420  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:59.318754  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:59.818474  475694 type.go:168] "Request Body" body=""
	I1216 04:32:59.818558  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:59.818911  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:00.318619  475694 type.go:168] "Request Body" body=""
	I1216 04:33:00.319047  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:00.319356  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:00.819156  475694 type.go:168] "Request Body" body=""
	I1216 04:33:00.819244  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:00.819576  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:01.319425  475694 type.go:168] "Request Body" body=""
	I1216 04:33:01.319520  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:01.319865  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:01.319922  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:01.818853  475694 type.go:168] "Request Body" body=""
	I1216 04:33:01.818926  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:01.819244  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:02.319032  475694 type.go:168] "Request Body" body=""
	I1216 04:33:02.319108  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:02.319434  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:02.819246  475694 type.go:168] "Request Body" body=""
	I1216 04:33:02.819327  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:02.819678  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:03.319320  475694 type.go:168] "Request Body" body=""
	I1216 04:33:03.319398  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:03.319661  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:03.818365  475694 type.go:168] "Request Body" body=""
	I1216 04:33:03.818441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:03.818761  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:03.818823  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:04.318514  475694 type.go:168] "Request Body" body=""
	I1216 04:33:04.318596  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:04.318928  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:04.818433  475694 type.go:168] "Request Body" body=""
	I1216 04:33:04.818526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:04.818807  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:05.318444  475694 type.go:168] "Request Body" body=""
	I1216 04:33:05.318518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:05.318865  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:05.818451  475694 type.go:168] "Request Body" body=""
	I1216 04:33:05.818526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:05.818904  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:05.818960  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:06.318446  475694 type.go:168] "Request Body" body=""
	I1216 04:33:06.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:06.318787  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-763073 polls repeat every ~500ms from 04:33:06 through 04:34:07, each returning no response ("dial tcp 192.168.49.2:8441: connect: connection refused"); node_ready.go:55 logs the same retry warning roughly every two seconds ...]
	W1216 04:34:07.819532  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:08.319248  475694 type.go:168] "Request Body" body=""
	I1216 04:34:08.319324  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:08.319660  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:08.819334  475694 type.go:168] "Request Body" body=""
	I1216 04:34:08.819414  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:08.819748  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:09.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:34:09.318487  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:09.318728  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:09.818404  475694 type.go:168] "Request Body" body=""
	I1216 04:34:09.818495  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:09.818787  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:10.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:34:10.318526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:10.318882  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:10.318937  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:10.819332  475694 type.go:168] "Request Body" body=""
	I1216 04:34:10.819407  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:10.819663  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:11.318375  475694 type.go:168] "Request Body" body=""
	I1216 04:34:11.318447  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:11.318755  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:11.818996  475694 type.go:168] "Request Body" body=""
	I1216 04:34:11.819077  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:11.819410  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:12.319155  475694 type.go:168] "Request Body" body=""
	I1216 04:34:12.319226  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:12.319475  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:12.319519  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:12.819271  475694 type.go:168] "Request Body" body=""
	I1216 04:34:12.819346  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:12.819689  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:13.318380  475694 type.go:168] "Request Body" body=""
	I1216 04:34:13.318462  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:13.318793  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:13.818479  475694 type.go:168] "Request Body" body=""
	I1216 04:34:13.818559  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:13.818826  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:14.318453  475694 type.go:168] "Request Body" body=""
	I1216 04:34:14.318535  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:14.318885  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:14.818564  475694 type.go:168] "Request Body" body=""
	I1216 04:34:14.818639  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:14.818968  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:14.819021  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:15.318668  475694 type.go:168] "Request Body" body=""
	I1216 04:34:15.318742  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:15.319003  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:15.818382  475694 type.go:168] "Request Body" body=""
	I1216 04:34:15.818461  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:15.818778  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:16.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:34:16.318521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:16.318867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:16.818753  475694 type.go:168] "Request Body" body=""
	I1216 04:34:16.818825  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:16.819126  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:16.819186  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:17.318469  475694 type.go:168] "Request Body" body=""
	I1216 04:34:17.318558  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:17.318854  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:17.818418  475694 type.go:168] "Request Body" body=""
	I1216 04:34:17.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:17.818784  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:18.318425  475694 type.go:168] "Request Body" body=""
	I1216 04:34:18.318500  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:18.318756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:18.818343  475694 type.go:168] "Request Body" body=""
	I1216 04:34:18.818425  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:18.818802  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:19.318462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:19.318541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:19.318861  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:19.318915  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:19.818577  475694 type.go:168] "Request Body" body=""
	I1216 04:34:19.818646  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:19.818927  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:20.318439  475694 type.go:168] "Request Body" body=""
	I1216 04:34:20.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:20.318833  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:20.818433  475694 type.go:168] "Request Body" body=""
	I1216 04:34:20.818521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:20.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:21.319360  475694 type.go:168] "Request Body" body=""
	I1216 04:34:21.319430  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:21.319702  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:21.319743  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:21.818995  475694 type.go:168] "Request Body" body=""
	I1216 04:34:21.819068  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:21.819437  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:22.319208  475694 type.go:168] "Request Body" body=""
	I1216 04:34:22.319287  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:22.319613  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:22.819318  475694 type.go:168] "Request Body" body=""
	I1216 04:34:22.819390  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:22.819643  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:23.318344  475694 type.go:168] "Request Body" body=""
	I1216 04:34:23.318422  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:23.318762  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:23.818462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:23.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:23.818875  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:23.818927  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:24.318334  475694 type.go:168] "Request Body" body=""
	I1216 04:34:24.318402  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:24.318670  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:24.818364  475694 type.go:168] "Request Body" body=""
	I1216 04:34:24.818442  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:24.818790  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:25.318379  475694 type.go:168] "Request Body" body=""
	I1216 04:34:25.318455  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:25.318831  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:25.818514  475694 type.go:168] "Request Body" body=""
	I1216 04:34:25.818579  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:25.818836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:26.318398  475694 type.go:168] "Request Body" body=""
	I1216 04:34:26.318476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:26.318806  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:26.318858  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:26.818668  475694 type.go:168] "Request Body" body=""
	I1216 04:34:26.818748  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:26.819069  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:27.319360  475694 type.go:168] "Request Body" body=""
	I1216 04:34:27.319437  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:27.319709  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:27.818413  475694 type.go:168] "Request Body" body=""
	I1216 04:34:27.818495  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:27.818834  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:28.318554  475694 type.go:168] "Request Body" body=""
	I1216 04:34:28.318636  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:28.318951  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:28.319002  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:28.818426  475694 type.go:168] "Request Body" body=""
	I1216 04:34:28.818493  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:28.818750  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:29.319398  475694 type.go:168] "Request Body" body=""
	I1216 04:34:29.319469  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:29.319795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:29.818453  475694 type.go:168] "Request Body" body=""
	I1216 04:34:29.818532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:29.818867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:30.319342  475694 type.go:168] "Request Body" body=""
	I1216 04:34:30.319416  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:30.319671  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:30.319711  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:30.818394  475694 type.go:168] "Request Body" body=""
	I1216 04:34:30.818480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:30.818849  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:31.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:34:31.318497  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:31.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:31.818933  475694 type.go:168] "Request Body" body=""
	I1216 04:34:31.819001  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:31.819258  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:32.319093  475694 type.go:168] "Request Body" body=""
	I1216 04:34:32.319167  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:32.319503  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:32.819320  475694 type.go:168] "Request Body" body=""
	I1216 04:34:32.819401  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:32.819759  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:32.819825  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:33.318460  475694 type.go:168] "Request Body" body=""
	I1216 04:34:33.318582  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:33.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:33.818458  475694 type.go:168] "Request Body" body=""
	I1216 04:34:33.818536  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:33.818889  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:34.318460  475694 type.go:168] "Request Body" body=""
	I1216 04:34:34.318539  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:34.318890  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:34.818406  475694 type.go:168] "Request Body" body=""
	I1216 04:34:34.818484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:34.818755  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:35.318438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:35.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:35.318826  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:35.318869  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:35.818405  475694 type.go:168] "Request Body" body=""
	I1216 04:34:35.818477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:35.818828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:36.318423  475694 type.go:168] "Request Body" body=""
	I1216 04:34:36.318497  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:36.318761  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:36.818896  475694 type.go:168] "Request Body" body=""
	I1216 04:34:36.818970  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:36.819296  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:37.318456  475694 type.go:168] "Request Body" body=""
	I1216 04:34:37.318532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:37.318915  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:37.318974  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:37.818620  475694 type.go:168] "Request Body" body=""
	I1216 04:34:37.818687  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:37.818946  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:38.318430  475694 type.go:168] "Request Body" body=""
	I1216 04:34:38.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:38.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:38.818581  475694 type.go:168] "Request Body" body=""
	I1216 04:34:38.818653  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:38.818976  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:39.319318  475694 type.go:168] "Request Body" body=""
	I1216 04:34:39.319398  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:39.319717  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:39.319766  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:39.819368  475694 type.go:168] "Request Body" body=""
	I1216 04:34:39.819451  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:39.819802  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:40.319399  475694 type.go:168] "Request Body" body=""
	I1216 04:34:40.319478  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:40.319815  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:40.819382  475694 type.go:168] "Request Body" body=""
	I1216 04:34:40.819458  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:40.819720  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:41.318432  475694 type.go:168] "Request Body" body=""
	I1216 04:34:41.318502  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:41.318828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:41.818913  475694 type.go:168] "Request Body" body=""
	I1216 04:34:41.818984  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:41.819332  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:41.819390  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:42.319148  475694 type.go:168] "Request Body" body=""
	I1216 04:34:42.319222  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:42.319522  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:42.819320  475694 type.go:168] "Request Body" body=""
	I1216 04:34:42.819397  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:42.819739  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:43.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:34:43.318503  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:43.319081  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:43.818686  475694 type.go:168] "Request Body" body=""
	I1216 04:34:43.818751  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:43.819000  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:44.318419  475694 type.go:168] "Request Body" body=""
	I1216 04:34:44.318489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:44.318800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:44.318860  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:44.818438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:44.818518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:44.818902  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:45.319407  475694 type.go:168] "Request Body" body=""
	I1216 04:34:45.319489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:45.319845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:45.818371  475694 type.go:168] "Request Body" body=""
	I1216 04:34:45.818447  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:45.818804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:46.318536  475694 type.go:168] "Request Body" body=""
	I1216 04:34:46.318624  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:46.318974  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:46.319036  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:46.818922  475694 type.go:168] "Request Body" body=""
	I1216 04:34:46.819000  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:46.819277  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:47.319079  475694 type.go:168] "Request Body" body=""
	I1216 04:34:47.319153  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:47.319486  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:47.819266  475694 type.go:168] "Request Body" body=""
	I1216 04:34:47.819341  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:47.819660  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:48.319327  475694 type.go:168] "Request Body" body=""
	I1216 04:34:48.319403  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:48.319723  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:48.319773  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:48.818362  475694 type.go:168] "Request Body" body=""
	I1216 04:34:48.818441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:48.818771  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:49.318493  475694 type.go:168] "Request Body" body=""
	I1216 04:34:49.318566  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:49.318886  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:49.818551  475694 type.go:168] "Request Body" body=""
	I1216 04:34:49.818618  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:49.818873  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:50.318400  475694 type.go:168] "Request Body" body=""
	I1216 04:34:50.318482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:50.318812  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:50.818522  475694 type.go:168] "Request Body" body=""
	I1216 04:34:50.818600  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:50.818928  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:50.818980  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:51.318625  475694 type.go:168] "Request Body" body=""
	I1216 04:34:51.318702  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:51.319079  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:51.819046  475694 type.go:168] "Request Body" body=""
	I1216 04:34:51.819123  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:51.819663  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:52.319344  475694 type.go:168] "Request Body" body=""
	I1216 04:34:52.319417  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:52.319779  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:52.818421  475694 type.go:168] "Request Body" body=""
	I1216 04:34:52.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:52.818829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:53.318447  475694 type.go:168] "Request Body" body=""
	I1216 04:34:53.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:53.318845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:53.318897  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:53.818432  475694 type.go:168] "Request Body" body=""
	I1216 04:34:53.818506  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:53.818834  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:54.319276  475694 type.go:168] "Request Body" body=""
	I1216 04:34:54.319352  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:54.319592  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:54.819372  475694 type.go:168] "Request Body" body=""
	I1216 04:34:54.819451  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:54.819794  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:55.318383  475694 type.go:168] "Request Body" body=""
	I1216 04:34:55.318468  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:55.318798  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:55.818467  475694 type.go:168] "Request Body" body=""
	I1216 04:34:55.818538  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:55.818798  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:55.818839  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:56.318396  475694 type.go:168] "Request Body" body=""
	I1216 04:34:56.318467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:56.318799  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:56.818695  475694 type.go:168] "Request Body" body=""
	I1216 04:34:56.818770  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:56.819054  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:57.318729  475694 type.go:168] "Request Body" body=""
	I1216 04:34:57.318810  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:57.319103  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:57.818438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:57.818512  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:57.818836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:57.818893  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:58.318454  475694 type.go:168] "Request Body" body=""
	I1216 04:34:58.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:58.318867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:58.818427  475694 type.go:168] "Request Body" body=""
	I1216 04:34:58.818499  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:58.818756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:59.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:34:59.318530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:59.318870  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:59.818462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:59.818542  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:59.818859  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:59.818914  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:00.326681  475694 type.go:168] "Request Body" body=""
	I1216 04:35:00.327158  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:00.327589  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:00.818334  475694 type.go:168] "Request Body" body=""
	I1216 04:35:00.818414  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:00.818768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:01.318487  475694 type.go:168] "Request Body" body=""
	I1216 04:35:01.318573  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:01.318953  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:01.818952  475694 type.go:168] "Request Body" body=""
	I1216 04:35:01.819020  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:01.819285  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:01.819326  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:02.319143  475694 type.go:168] "Request Body" body=""
	I1216 04:35:02.319233  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:02.319559  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:02.819407  475694 type.go:168] "Request Body" body=""
	I1216 04:35:02.819477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:02.819810  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:03.318360  475694 type.go:168] "Request Body" body=""
	I1216 04:35:03.318434  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:03.318682  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:03.818469  475694 type.go:168] "Request Body" body=""
	I1216 04:35:03.818556  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:03.818922  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:04.318461  475694 type.go:168] "Request Body" body=""
	I1216 04:35:04.318553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:04.318846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:04.318896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:04.818557  475694 type.go:168] "Request Body" body=""
	I1216 04:35:04.818626  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:04.818950  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:05.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:35:05.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:05.318874  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:05.818589  475694 type.go:168] "Request Body" body=""
	I1216 04:35:05.818665  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:05.819015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:06.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:35:06.318491  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:06.318748  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:06.818795  475694 type.go:168] "Request Body" body=""
	I1216 04:35:06.818876  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:06.819216  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:06.819271  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:07.319075  475694 type.go:168] "Request Body" body=""
	I1216 04:35:07.319158  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:07.319501  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:07.819216  475694 type.go:168] "Request Body" body=""
	I1216 04:35:07.819290  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:07.819547  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:08.319297  475694 type.go:168] "Request Body" body=""
	I1216 04:35:08.319373  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:08.319684  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:08.819382  475694 type.go:168] "Request Body" body=""
	I1216 04:35:08.819455  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:08.819785  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:08.819836  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:09.318416  475694 type.go:168] "Request Body" body=""
	I1216 04:35:09.318490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:09.318808  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:09.818352  475694 type.go:168] "Request Body" body=""
	I1216 04:35:09.818429  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:09.818778  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:10.318414  475694 type.go:168] "Request Body" body=""
	I1216 04:35:10.318493  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:10.318815  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:10.818431  475694 type.go:168] "Request Body" body=""
	I1216 04:35:10.818498  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:10.818758  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:11.318468  475694 type.go:168] "Request Body" body=""
	I1216 04:35:11.318548  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:11.318880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:11.318937  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:11.818967  475694 type.go:168] "Request Body" body=""
	I1216 04:35:11.819040  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:11.819370  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:12.318986  475694 type.go:168] "Request Body" body=""
	I1216 04:35:12.319065  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:12.319377  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:12.819142  475694 type.go:168] "Request Body" body=""
	I1216 04:35:12.819222  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:12.819598  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:13.319413  475694 type.go:168] "Request Body" body=""
	I1216 04:35:13.319499  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:13.319864  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:13.319929  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:13.818336  475694 type.go:168] "Request Body" body=""
	I1216 04:35:13.818409  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:13.818718  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:14.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:35:14.318496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:14.318831  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:14.818415  475694 type.go:168] "Request Body" body=""
	I1216 04:35:14.818500  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:14.818819  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:15.318419  475694 type.go:168] "Request Body" body=""
	I1216 04:35:15.318513  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:15.318797  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:15.818468  475694 type.go:168] "Request Body" body=""
	I1216 04:35:15.818560  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:15.818910  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:15.818967  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:16.318443  475694 type.go:168] "Request Body" body=""
	I1216 04:35:16.318518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:16.318843  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:16.818768  475694 type.go:168] "Request Body" body=""
	I1216 04:35:16.818839  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:16.819094  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:17.318429  475694 type.go:168] "Request Body" body=""
	I1216 04:35:17.318503  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:17.318829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:17.818390  475694 type.go:168] "Request Body" body=""
	I1216 04:35:17.818465  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:17.818786  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:18.318479  475694 type.go:168] "Request Body" body=""
	I1216 04:35:18.318546  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:18.318807  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:18.318849  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:18.818373  475694 type.go:168] "Request Body" body=""
	I1216 04:35:18.818453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:18.818776  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:19.318510  475694 type.go:168] "Request Body" body=""
	I1216 04:35:19.318592  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:19.318922  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:19.818621  475694 type.go:168] "Request Body" body=""
	I1216 04:35:19.818702  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:19.818973  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:20.318397  475694 type.go:168] "Request Body" body=""
	I1216 04:35:20.318480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:20.318838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:20.318892  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:20.818426  475694 type.go:168] "Request Body" body=""
	I1216 04:35:20.818507  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:20.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:21.318544  475694 type.go:168] "Request Body" body=""
	I1216 04:35:21.318656  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:21.318922  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:21.819053  475694 type.go:168] "Request Body" body=""
	I1216 04:35:21.819131  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:21.819472  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:22.319274  475694 type.go:168] "Request Body" body=""
	I1216 04:35:22.319345  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:22.319672  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:22.319728  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:22.818396  475694 type.go:168] "Request Body" body=""
	I1216 04:35:22.818467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:22.818895  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:23.318440  475694 type.go:168] "Request Body" body=""
	I1216 04:35:23.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:23.318836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:23.818345  475694 type.go:168] "Request Body" body=""
	I1216 04:35:23.818420  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:23.818765  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:24.319370  475694 type.go:168] "Request Body" body=""
	I1216 04:35:24.319441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:24.319704  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:24.818474  475694 type.go:168] "Request Body" body=""
	I1216 04:35:24.818553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:24.818904  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:24.818962  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:25.318346  475694 type.go:168] "Request Body" body=""
	I1216 04:35:25.318430  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:25.318768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:25.819340  475694 type.go:168] "Request Body" body=""
	I1216 04:35:25.819421  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:25.819694  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:26.319409  475694 type.go:168] "Request Body" body=""
	I1216 04:35:26.319480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:26.319786  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:26.818711  475694 type.go:168] "Request Body" body=""
	I1216 04:35:26.818786  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:26.819098  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:26.819158  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:27.318411  475694 type.go:168] "Request Body" body=""
	I1216 04:35:27.318489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:27.318803  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:27.818479  475694 type.go:168] "Request Body" body=""
	I1216 04:35:27.818557  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:27.818881  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:28.318434  475694 type.go:168] "Request Body" body=""
	I1216 04:35:28.318510  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:28.318832  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:28.818494  475694 type.go:168] "Request Body" body=""
	I1216 04:35:28.818562  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:28.818812  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:29.318411  475694 type.go:168] "Request Body" body=""
	I1216 04:35:29.318484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:29.318838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:29.318892  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:29.818377  475694 type.go:168] "Request Body" body=""
	I1216 04:35:29.818455  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:29.818804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:30.319321  475694 type.go:168] "Request Body" body=""
	I1216 04:35:30.319394  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:30.319671  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:30.818400  475694 type.go:168] "Request Body" body=""
	I1216 04:35:30.818475  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:30.818821  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:31.318538  475694 type.go:168] "Request Body" body=""
	I1216 04:35:31.318610  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:31.318926  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:31.318982  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:31.819068  475694 type.go:168] "Request Body" body=""
	I1216 04:35:31.819136  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:31.819402  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:32.319162  475694 type.go:168] "Request Body" body=""
	I1216 04:35:32.319242  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:32.319568  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:32.819397  475694 type.go:168] "Request Body" body=""
	I1216 04:35:32.819471  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:32.819805  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:33.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:35:33.318490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:33.318749  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:33.818404  475694 type.go:168] "Request Body" body=""
	I1216 04:35:33.818483  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:33.818824  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:33.818882  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:34.318388  475694 type.go:168] "Request Body" body=""
	I1216 04:35:34.318473  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:34.318868  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:34.819425  475694 type.go:168] "Request Body" body=""
	I1216 04:35:34.819500  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:34.819756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:35.318461  475694 type.go:168] "Request Body" body=""
	I1216 04:35:35.318545  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:35.318883  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:35.818350  475694 type.go:168] "Request Body" body=""
	I1216 04:35:35.818457  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:35.818780  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:36.319383  475694 type.go:168] "Request Body" body=""
	I1216 04:35:36.319450  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:36.319711  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:36.319751  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:36.818719  475694 type.go:168] "Request Body" body=""
	I1216 04:35:36.818823  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:36.819149  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:37.318863  475694 type.go:168] "Request Body" body=""
	I1216 04:35:37.318957  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:37.319340  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:37.819103  475694 type.go:168] "Request Body" body=""
	I1216 04:35:37.819178  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:37.819440  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:38.318528  475694 type.go:168] "Request Body" body=""
	I1216 04:35:38.318602  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:38.318927  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:38.818449  475694 type.go:168] "Request Body" body=""
	I1216 04:35:38.818523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:38.818875  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:38.818930  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:39.318332  475694 type.go:168] "Request Body" body=""
	I1216 04:35:39.318414  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:39.318736  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:39.818477  475694 type.go:168] "Request Body" body=""
	I1216 04:35:39.818550  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:39.818846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:40.318380  475694 type.go:168] "Request Body" body=""
	I1216 04:35:40.318452  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:40.318777  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:40.818480  475694 type.go:168] "Request Body" body=""
	I1216 04:35:40.818560  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:40.818825  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:41.318437  475694 type.go:168] "Request Body" body=""
	I1216 04:35:41.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:41.318879  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:41.318931  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:41.818408  475694 type.go:168] "Request Body" body=""
	I1216 04:35:41.818485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:41.818817  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:42.319418  475694 type.go:168] "Request Body" body=""
	I1216 04:35:42.319504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:42.319849  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:42.818357  475694 type.go:168] "Request Body" body=""
	I1216 04:35:42.818432  475694 node_ready.go:38] duration metric: took 6m0.000197669s for node "functional-763073" to be "Ready" ...
	I1216 04:35:42.821511  475694 out.go:203] 
	W1216 04:35:42.824400  475694 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 04:35:42.824420  475694 out.go:285] * 
	W1216 04:35:42.826578  475694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:35:42.829442  475694 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-763073 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.663220833s for "functional-763073" cluster.
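
The six minutes of identical GET / connection-refused cycles above are minikube's node_ready.go readiness poll hammering a dead apiserver every 500ms until its 6m context deadline fires. Below is a minimal client-go sketch of that pattern; it is not minikube's actual implementation, and the package and function names are invented for illustration:

// Package waitready sketches the poll loop behind the node_ready.go lines
// above: GET the node every 500ms, tolerate transient errors such as
// "connection refused", and give up when the context deadline expires.
package waitready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady is hypothetical; minikube's real helper lives in node_ready.go.
func WaitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			// Matches the failure mode logged above:
			// "WaitNodeCondition: context deadline exceeded".
			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
		case <-tick.C:
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				continue // apiserver down (dial tcp ... connection refused); retry
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}

Driven by a context from context.WithTimeout(ctx, 6*time.Minute), a loop of this shape would produce the "took 6m0.000197669s ... to be \"Ready\"" duration metric seen in the stderr above.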
I1216 04:35:43.415857  441727 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
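For reference, the Ports map in the inspect output above is where minikube's driver resolves the host-mapped SSH endpoint (22/tcp -> 127.0.0.1:33148 in this run). A minimal sketch of the same lookup, assuming the functional-763073 container is still present on the local Docker daemon; the Go template is the same one the cli_runner lines in the logs below execute:

	# Print the host port published for the container's SSH port (22/tcp).
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  functional-763073
	# expected output for this run: 33148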
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (334.638199ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
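The status probe above renders a single field of minikube's status struct through a Go template, so the harness can check the host state without parsing the full status table. A minimal sketch of the same probe, assuming the profile and node names from this run; the precise meaning of the non-zero exit code is not asserted here, the harness itself only notes that it "may be ok":

	# Query only the Host field for the functional-763073 profile/node.
	out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
	echo "exit: $?"   # this run printed "Running" but exited 2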
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-763073 logs -n 25: (1.058063629s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-861171 addons list                                                                                                                     │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:20 UTC │
	│ addons         │ functional-861171 addons list -o json                                                                                                             │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:20 UTC │
	│ service        │ functional-861171 service hello-node-connect --url                                                                                                │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:20 UTC │
	│ start          │ -p functional-861171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │                     │
	│ start          │ -p functional-861171 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                   │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │                     │
	│ start          │ -p functional-861171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-861171 --alsologtostderr -v=1                                                                                    │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service list                                                                                                                    │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service list -o json                                                                                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service --namespace=default --https --url hello-node                                                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service hello-node --url --format={{.IP}}                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service hello-node --url                                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format short --alsologtostderr                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format yaml --alsologtostderr                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ ssh            │ functional-861171 ssh pgrep buildkitd                                                                                                             │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │                     │
	│ image          │ functional-861171 image build -t localhost/my-image:functional-861171 testdata/build --alsologtostderr                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format json --alsologtostderr                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format table --alsologtostderr                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls                                                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ delete         │ -p functional-861171                                                                                                                              │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ start          │ -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │                     │
	│ start          │ -p functional-763073 --alsologtostderr -v=8                                                                                                       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:29:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:29:36.794313  475694 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:29:36.794434  475694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:29:36.794446  475694 out.go:374] Setting ErrFile to fd 2...
	I1216 04:29:36.794452  475694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:29:36.794700  475694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:29:36.795091  475694 out.go:368] Setting JSON to false
	I1216 04:29:36.795948  475694 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11523,"bootTime":1765847854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:29:36.796022  475694 start.go:143] virtualization:  
	I1216 04:29:36.799564  475694 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:29:36.803377  475694 notify.go:221] Checking for updates...
	I1216 04:29:36.806471  475694 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:29:36.809418  475694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:29:36.812382  475694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:36.815368  475694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:29:36.818384  475694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:29:36.821299  475694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:29:36.824780  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:36.824898  475694 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:29:36.853440  475694 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:29:36.853553  475694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:29:36.911081  475694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:29:36.901976085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:29:36.911198  475694 docker.go:319] overlay module found
	I1216 04:29:36.914378  475694 out.go:179] * Using the docker driver based on existing profile
	I1216 04:29:36.917157  475694 start.go:309] selected driver: docker
	I1216 04:29:36.917180  475694 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:36.917338  475694 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:29:36.917450  475694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:29:36.970986  475694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:29:36.961820507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:29:36.971442  475694 cni.go:84] Creating CNI manager for ""
	I1216 04:29:36.971503  475694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:29:36.971553  475694 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:36.974751  475694 out.go:179] * Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	I1216 04:29:36.977516  475694 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:29:36.980431  475694 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:29:36.983493  475694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:29:36.983530  475694 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:29:36.983585  475694 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:29:36.983595  475694 cache.go:65] Caching tarball of preloaded images
	I1216 04:29:36.983676  475694 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:29:36.983683  475694 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:29:36.983782  475694 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:29:37.009018  475694 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:29:37.009047  475694 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:29:37.009096  475694 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:29:37.009136  475694 start.go:360] acquireMachinesLock for functional-763073: {Name:mk37f96bdb0feffde12ec58bbc71256d58abc2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:29:37.009247  475694 start.go:364] duration metric: took 82.708µs to acquireMachinesLock for "functional-763073"
	I1216 04:29:37.009287  475694 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:29:37.009293  475694 fix.go:54] fixHost starting: 
	I1216 04:29:37.009582  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:37.028726  475694 fix.go:112] recreateIfNeeded on functional-763073: state=Running err=<nil>
	W1216 04:29:37.028764  475694 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:29:37.032201  475694 out.go:252] * Updating the running docker "functional-763073" container ...
	I1216 04:29:37.032251  475694 machine.go:94] provisionDockerMachine start ...
	I1216 04:29:37.032362  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.050328  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.050673  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.050689  475694 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:29:37.192783  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:29:37.192826  475694 ubuntu.go:182] provisioning hostname "functional-763073"
	I1216 04:29:37.192931  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.211313  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.211628  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.211639  475694 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-763073 && echo "functional-763073" | sudo tee /etc/hostname
	I1216 04:29:37.354192  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:29:37.354269  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.376898  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.377254  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.377278  475694 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-763073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:29:37.509279  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:29:37.509306  475694 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:29:37.509326  475694 ubuntu.go:190] setting up certificates
	I1216 04:29:37.509346  475694 provision.go:84] configureAuth start
	I1216 04:29:37.509406  475694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:29:37.527206  475694 provision.go:143] copyHostCerts
	I1216 04:29:37.527264  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:29:37.527308  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 04:29:37.527320  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:29:37.527395  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:29:37.527487  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:29:37.527509  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 04:29:37.527517  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:29:37.527545  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:29:37.527594  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:29:37.527615  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 04:29:37.527622  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:29:37.527648  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:29:37.527699  475694 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.functional-763073 san=[127.0.0.1 192.168.49.2 functional-763073 localhost minikube]
	I1216 04:29:37.800879  475694 provision.go:177] copyRemoteCerts
	I1216 04:29:37.800949  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:29:37.800990  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.823288  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:37.920869  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 04:29:37.920929  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:29:37.938521  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 04:29:37.938583  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:29:37.956377  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 04:29:37.956439  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:29:37.974119  475694 provision.go:87] duration metric: took 464.750518ms to configureAuth
	I1216 04:29:37.974148  475694 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:29:37.974331  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:37.974450  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.991914  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.992233  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.992254  475694 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:29:38.308392  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:29:38.308467  475694 machine.go:97] duration metric: took 1.27620546s to provisionDockerMachine
	I1216 04:29:38.308501  475694 start.go:293] postStartSetup for "functional-763073" (driver="docker")
	I1216 04:29:38.308543  475694 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:29:38.308636  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:29:38.308736  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.327973  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.425975  475694 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:29:38.429465  475694 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 04:29:38.429486  475694 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 04:29:38.429491  475694 command_runner.go:130] > VERSION_ID="12"
	I1216 04:29:38.429495  475694 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 04:29:38.429500  475694 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 04:29:38.429503  475694 command_runner.go:130] > ID=debian
	I1216 04:29:38.429508  475694 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 04:29:38.429575  475694 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 04:29:38.429584  475694 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 04:29:38.429642  475694 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:29:38.429664  475694 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:29:38.429675  475694 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:29:38.429740  475694 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:29:38.429824  475694 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 04:29:38.429840  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> /etc/ssl/certs/4417272.pem
	I1216 04:29:38.429918  475694 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> hosts in /etc/test/nested/copy/441727
	I1216 04:29:38.429926  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> /etc/test/nested/copy/441727/hosts
	I1216 04:29:38.429973  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/441727
	I1216 04:29:38.438164  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:29:38.456472  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts --> /etc/test/nested/copy/441727/hosts (40 bytes)
	I1216 04:29:38.474815  475694 start.go:296] duration metric: took 166.27897ms for postStartSetup
	I1216 04:29:38.474942  475694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:29:38.475008  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.493257  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.586186  475694 command_runner.go:130] > 13%
	I1216 04:29:38.586744  475694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:29:38.591214  475694 command_runner.go:130] > 169G
	I1216 04:29:38.591631  475694 fix.go:56] duration metric: took 1.582334669s for fixHost
	I1216 04:29:38.591655  475694 start.go:83] releasing machines lock for "functional-763073", held for 1.582392532s
	I1216 04:29:38.591756  475694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:29:38.610497  475694 ssh_runner.go:195] Run: cat /version.json
	I1216 04:29:38.610580  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.610804  475694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:29:38.610862  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.644780  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.648235  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.740654  475694 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765575274-22117", "minikube_version": "v1.37.0", "commit": "908107e58d7f489afb59ecef3679cbdc57b624cc"}
	I1216 04:29:38.740792  475694 ssh_runner.go:195] Run: systemctl --version
	I1216 04:29:38.835621  475694 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 04:29:38.838633  475694 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 04:29:38.838716  475694 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 04:29:38.838811  475694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:29:38.876422  475694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 04:29:38.880827  475694 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 04:29:38.881001  475694 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:29:38.881102  475694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:29:38.888966  475694 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:29:38.888992  475694 start.go:496] detecting cgroup driver to use...
	I1216 04:29:38.889023  475694 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:29:38.889116  475694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:29:38.904919  475694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:29:38.918230  475694 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:29:38.918296  475694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:29:38.934386  475694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:29:38.947903  475694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:29:39.064725  475694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:29:39.186461  475694 docker.go:234] disabling docker service ...
	I1216 04:29:39.186555  475694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:29:39.201259  475694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:29:39.214213  475694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:29:39.331697  475694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:29:39.468929  475694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:29:39.481743  475694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:29:39.494008  475694 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1216 04:29:39.494807  475694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:29:39.494889  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.503668  475694 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:29:39.503751  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.513027  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.521738  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.530476  475694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:29:39.538796  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.547730  475694 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.556341  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.565046  475694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:29:39.571643  475694 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 04:29:39.572565  475694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:29:39.579896  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:39.695396  475694 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 04:29:39.852818  475694 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:29:39.852930  475694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:29:39.856967  475694 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1216 04:29:39.856989  475694 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 04:29:39.856996  475694 command_runner.go:130] > Device: 0,72	Inode: 1641        Links: 1
	I1216 04:29:39.857013  475694 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:29:39.857019  475694 command_runner.go:130] > Access: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857028  475694 command_runner.go:130] > Modify: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857036  475694 command_runner.go:130] > Change: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857040  475694 command_runner.go:130] >  Birth: -
	I1216 04:29:39.857332  475694 start.go:564] Will wait 60s for crictl version
	I1216 04:29:39.857393  475694 ssh_runner.go:195] Run: which crictl
	I1216 04:29:39.860635  475694 command_runner.go:130] > /usr/local/bin/crictl
	I1216 04:29:39.860907  475694 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:29:39.883882  475694 command_runner.go:130] > Version:  0.1.0
	I1216 04:29:39.883905  475694 command_runner.go:130] > RuntimeName:  cri-o
	I1216 04:29:39.883910  475694 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1216 04:29:39.883916  475694 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 04:29:39.886266  475694 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:29:39.886355  475694 ssh_runner.go:195] Run: crio --version
	I1216 04:29:39.912976  475694 command_runner.go:130] > crio version 1.34.3
	I1216 04:29:39.913004  475694 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 04:29:39.913011  475694 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 04:29:39.913016  475694 command_runner.go:130] >    GitTreeState:   dirty
	I1216 04:29:39.913021  475694 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 04:29:39.913026  475694 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 04:29:39.913030  475694 command_runner.go:130] >    Compiler:       gc
	I1216 04:29:39.913034  475694 command_runner.go:130] >    Platform:       linux/arm64
	I1216 04:29:39.913044  475694 command_runner.go:130] >    Linkmode:       static
	I1216 04:29:39.913048  475694 command_runner.go:130] >    BuildTags:
	I1216 04:29:39.913052  475694 command_runner.go:130] >      static
	I1216 04:29:39.913055  475694 command_runner.go:130] >      netgo
	I1216 04:29:39.913059  475694 command_runner.go:130] >      osusergo
	I1216 04:29:39.913089  475694 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 04:29:39.913094  475694 command_runner.go:130] >      seccomp
	I1216 04:29:39.913097  475694 command_runner.go:130] >      apparmor
	I1216 04:29:39.913101  475694 command_runner.go:130] >      selinux
	I1216 04:29:39.913104  475694 command_runner.go:130] >    LDFlags:          unknown
	I1216 04:29:39.913108  475694 command_runner.go:130] >    SeccompEnabled:   true
	I1216 04:29:39.913112  475694 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 04:29:39.915574  475694 ssh_runner.go:195] Run: crio --version
	I1216 04:29:39.945490  475694 command_runner.go:130] > crio version 1.34.3
	I1216 04:29:39.945513  475694 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 04:29:39.945520  475694 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 04:29:39.945525  475694 command_runner.go:130] >    GitTreeState:   dirty
	I1216 04:29:39.945530  475694 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 04:29:39.945534  475694 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 04:29:39.945538  475694 command_runner.go:130] >    Compiler:       gc
	I1216 04:29:39.945543  475694 command_runner.go:130] >    Platform:       linux/arm64
	I1216 04:29:39.945548  475694 command_runner.go:130] >    Linkmode:       static
	I1216 04:29:39.945551  475694 command_runner.go:130] >    BuildTags:
	I1216 04:29:39.945557  475694 command_runner.go:130] >      static
	I1216 04:29:39.945561  475694 command_runner.go:130] >      netgo
	I1216 04:29:39.945587  475694 command_runner.go:130] >      osusergo
	I1216 04:29:39.945594  475694 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 04:29:39.945598  475694 command_runner.go:130] >      seccomp
	I1216 04:29:39.945601  475694 command_runner.go:130] >      apparmor
	I1216 04:29:39.945607  475694 command_runner.go:130] >      selinux
	I1216 04:29:39.945617  475694 command_runner.go:130] >    LDFlags:          unknown
	I1216 04:29:39.945623  475694 command_runner.go:130] >    SeccompEnabled:   true
	I1216 04:29:39.945639  475694 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 04:29:39.952832  475694 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 04:29:39.955738  475694 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:29:39.972578  475694 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:29:39.976813  475694 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 04:29:39.976940  475694 kubeadm.go:884] updating cluster {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:29:39.977085  475694 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:29:39.977157  475694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:29:40.026676  475694 command_runner.go:130] > {
	I1216 04:29:40.026700  475694 command_runner.go:130] >   "images":  [
	I1216 04:29:40.026707  475694 command_runner.go:130] >     {
	I1216 04:29:40.026715  475694 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 04:29:40.026721  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026727  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 04:29:40.026731  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026736  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026745  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 04:29:40.026758  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 04:29:40.026762  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026770  475694 command_runner.go:130] >       "size":  "111333938",
	I1216 04:29:40.026775  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.026789  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026796  475694 command_runner.go:130] >     },
	I1216 04:29:40.026800  475694 command_runner.go:130] >     {
	I1216 04:29:40.026807  475694 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 04:29:40.026815  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026820  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 04:29:40.026827  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026831  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026843  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 04:29:40.026852  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 04:29:40.026859  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026863  475694 command_runner.go:130] >       "size":  "29037500",
	I1216 04:29:40.026867  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.026879  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026883  475694 command_runner.go:130] >     },
	I1216 04:29:40.026895  475694 command_runner.go:130] >     {
	I1216 04:29:40.026906  475694 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 04:29:40.026917  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026927  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 04:29:40.026930  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026934  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026942  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 04:29:40.026954  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 04:29:40.026962  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026966  475694 command_runner.go:130] >       "size":  "74491780",
	I1216 04:29:40.026974  475694 command_runner.go:130] >       "username":  "nonroot",
	I1216 04:29:40.026979  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026985  475694 command_runner.go:130] >     },
	I1216 04:29:40.026988  475694 command_runner.go:130] >     {
	I1216 04:29:40.026995  475694 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 04:29:40.027002  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027012  475694 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 04:29:40.027019  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027023  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027031  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 04:29:40.027041  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 04:29:40.027047  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027052  475694 command_runner.go:130] >       "size":  "60857170",
	I1216 04:29:40.027058  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027063  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027070  475694 command_runner.go:130] >       },
	I1216 04:29:40.027084  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027092  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027096  475694 command_runner.go:130] >     },
	I1216 04:29:40.027100  475694 command_runner.go:130] >     {
	I1216 04:29:40.027106  475694 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 04:29:40.027114  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027119  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 04:29:40.027129  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027138  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027146  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 04:29:40.027157  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 04:29:40.027161  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027168  475694 command_runner.go:130] >       "size":  "84949999",
	I1216 04:29:40.027171  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027175  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027183  475694 command_runner.go:130] >       },
	I1216 04:29:40.027187  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027192  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027200  475694 command_runner.go:130] >     },
	I1216 04:29:40.027203  475694 command_runner.go:130] >     {
	I1216 04:29:40.027214  475694 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 04:29:40.027229  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027235  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 04:29:40.027241  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027245  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027254  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 04:29:40.027266  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 04:29:40.027269  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027278  475694 command_runner.go:130] >       "size":  "72170325",
	I1216 04:29:40.027281  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027288  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027292  475694 command_runner.go:130] >       },
	I1216 04:29:40.027300  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027305  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027311  475694 command_runner.go:130] >     },
	I1216 04:29:40.027314  475694 command_runner.go:130] >     {
	I1216 04:29:40.027320  475694 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 04:29:40.027324  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027333  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 04:29:40.027337  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027345  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027357  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 04:29:40.027366  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 04:29:40.027372  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027376  475694 command_runner.go:130] >       "size":  "74106775",
	I1216 04:29:40.027384  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027389  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027395  475694 command_runner.go:130] >     },
	I1216 04:29:40.027399  475694 command_runner.go:130] >     {
	I1216 04:29:40.027405  475694 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 04:29:40.027409  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027423  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 04:29:40.027430  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027434  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027442  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 04:29:40.027466  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 04:29:40.027473  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027478  475694 command_runner.go:130] >       "size":  "49822549",
	I1216 04:29:40.027485  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027489  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027492  475694 command_runner.go:130] >       },
	I1216 04:29:40.027498  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027507  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027514  475694 command_runner.go:130] >     },
	I1216 04:29:40.027517  475694 command_runner.go:130] >     {
	I1216 04:29:40.027524  475694 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 04:29:40.027531  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027536  475694 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.027542  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027547  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027557  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 04:29:40.027568  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 04:29:40.027573  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027586  475694 command_runner.go:130] >       "size":  "519884",
	I1216 04:29:40.027593  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027598  475694 command_runner.go:130] >         "value":  "65535"
	I1216 04:29:40.027601  475694 command_runner.go:130] >       },
	I1216 04:29:40.027610  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027614  475694 command_runner.go:130] >       "pinned":  true
	I1216 04:29:40.027620  475694 command_runner.go:130] >     }
	I1216 04:29:40.027623  475694 command_runner.go:130] >   ]
	I1216 04:29:40.027626  475694 command_runner.go:130] > }
	I1216 04:29:40.029894  475694 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:29:40.029927  475694 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:29:40.029987  475694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:29:40.055653  475694 command_runner.go:130] > {
	I1216 04:29:40.055673  475694 command_runner.go:130] >   "images":  [
	I1216 04:29:40.055678  475694 command_runner.go:130] >     {
	I1216 04:29:40.055687  475694 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 04:29:40.055692  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055697  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 04:29:40.055701  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055705  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055715  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 04:29:40.055724  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 04:29:40.055728  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055732  475694 command_runner.go:130] >       "size":  "111333938",
	I1216 04:29:40.055736  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055740  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055744  475694 command_runner.go:130] >     },
	I1216 04:29:40.055747  475694 command_runner.go:130] >     {
	I1216 04:29:40.055753  475694 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 04:29:40.055757  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055762  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 04:29:40.055765  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055769  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055787  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 04:29:40.055795  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 04:29:40.055798  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055802  475694 command_runner.go:130] >       "size":  "29037500",
	I1216 04:29:40.055806  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055817  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055820  475694 command_runner.go:130] >     },
	I1216 04:29:40.055824  475694 command_runner.go:130] >     {
	I1216 04:29:40.055830  475694 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 04:29:40.055833  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055838  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 04:29:40.055841  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055845  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055854  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 04:29:40.055862  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 04:29:40.055865  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055869  475694 command_runner.go:130] >       "size":  "74491780",
	I1216 04:29:40.055873  475694 command_runner.go:130] >       "username":  "nonroot",
	I1216 04:29:40.055876  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055879  475694 command_runner.go:130] >     },
	I1216 04:29:40.055882  475694 command_runner.go:130] >     {
	I1216 04:29:40.055891  475694 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 04:29:40.055894  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055899  475694 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 04:29:40.055904  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055908  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055915  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 04:29:40.055923  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 04:29:40.055926  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055929  475694 command_runner.go:130] >       "size":  "60857170",
	I1216 04:29:40.055933  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.055937  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.055940  475694 command_runner.go:130] >       },
	I1216 04:29:40.055952  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055956  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055959  475694 command_runner.go:130] >     },
	I1216 04:29:40.055961  475694 command_runner.go:130] >     {
	I1216 04:29:40.055968  475694 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 04:29:40.055971  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055976  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 04:29:40.055979  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055983  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055990  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 04:29:40.055998  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 04:29:40.056001  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056005  475694 command_runner.go:130] >       "size":  "84949999",
	I1216 04:29:40.056008  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056012  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056015  475694 command_runner.go:130] >       },
	I1216 04:29:40.056018  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056022  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056024  475694 command_runner.go:130] >     },
	I1216 04:29:40.056027  475694 command_runner.go:130] >     {
	I1216 04:29:40.056033  475694 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 04:29:40.056037  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056043  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 04:29:40.056045  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056049  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056057  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 04:29:40.056065  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 04:29:40.056068  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056072  475694 command_runner.go:130] >       "size":  "72170325",
	I1216 04:29:40.056075  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056079  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056082  475694 command_runner.go:130] >       },
	I1216 04:29:40.056085  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056092  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056096  475694 command_runner.go:130] >     },
	I1216 04:29:40.056099  475694 command_runner.go:130] >     {
	I1216 04:29:40.056106  475694 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 04:29:40.056110  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056115  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 04:29:40.056118  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056122  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056130  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 04:29:40.056137  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 04:29:40.056141  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056144  475694 command_runner.go:130] >       "size":  "74106775",
	I1216 04:29:40.056148  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056152  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056155  475694 command_runner.go:130] >     },
	I1216 04:29:40.056158  475694 command_runner.go:130] >     {
	I1216 04:29:40.056164  475694 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 04:29:40.056168  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056173  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 04:29:40.056176  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056180  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056188  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 04:29:40.056204  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 04:29:40.056207  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056211  475694 command_runner.go:130] >       "size":  "49822549",
	I1216 04:29:40.056215  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056218  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056221  475694 command_runner.go:130] >       },
	I1216 04:29:40.056225  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056228  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056231  475694 command_runner.go:130] >     },
	I1216 04:29:40.056233  475694 command_runner.go:130] >     {
	I1216 04:29:40.056240  475694 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 04:29:40.056247  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056251  475694 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.056255  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056259  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056266  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 04:29:40.056278  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 04:29:40.056281  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056285  475694 command_runner.go:130] >       "size":  "519884",
	I1216 04:29:40.056289  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056293  475694 command_runner.go:130] >         "value":  "65535"
	I1216 04:29:40.056296  475694 command_runner.go:130] >       },
	I1216 04:29:40.056299  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056303  475694 command_runner.go:130] >       "pinned":  true
	I1216 04:29:40.056305  475694 command_runner.go:130] >     }
	I1216 04:29:40.056308  475694 command_runner.go:130] >   ]
	I1216 04:29:40.056312  475694 command_runner.go:130] > }
	I1216 04:29:40.057842  475694 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:29:40.057866  475694 cache_images.go:86] Images are preloaded, skipping loading
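The image inventory above is fetched twice on purpose: crio.go first verifies that the preload populated CRI-O's store, and cache_images.go then re-queries it before deciding whether any cached images still need loading; since every expected v1.35.0-beta.0 image is present, both checks short-circuit. A minimal sketch of consuming the same `crictl images --output json` payload follows; the struct is a hypothetical reduction keyed to the JSON fields visible in the log (id, repoTags, repoDigests, size, pinned), not CRI-O's actual API type, and it assumes crictl is on PATH with a configured CRI endpoint.

    // Minimal sketch (not minikube's actual code): decode the JSON payload
    // that `sudo crictl images --output json` prints, as captured above.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"` // bytes, serialized as a string in the log
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	for _, img := range list.Images {
    		fmt.Println(img.RepoTags, img.Size)
    	}
    }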
	I1216 04:29:40.057874  475694 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 04:29:40.058028  475694 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-763073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
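The repeated ExecStart in the rendered drop-in is deliberate systemd syntax: for list-valued settings, a drop-in appends to the unit's existing commands unless the list is first cleared, so the bare `ExecStart=` resets it before the minikube-specific kubelet invocation is set. A hypothetical sketch of rendering such a drop-in with Go's text/template is below; the template and its fields are illustrative stand-ins, not minikube's actual kubelet template.

    // Hypothetical sketch: render a kubelet systemd drop-in like the one
    // logged above. Field names and the template body are assumptions.
    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	if err := t.Execute(os.Stdout, map[string]string{
    		"Runtime": "crio",
    		"Version": "v1.35.0-beta.0",
    		"Node":    "functional-763073",
    		"IP":      "192.168.49.2",
    	}); err != nil {
    		panic(err)
    	}
    }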
	I1216 04:29:40.058117  475694 ssh_runner.go:195] Run: crio config
	I1216 04:29:40.108801  475694 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1216 04:29:40.108825  475694 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1216 04:29:40.108833  475694 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1216 04:29:40.108837  475694 command_runner.go:130] > #
	I1216 04:29:40.108844  475694 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1216 04:29:40.108850  475694 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1216 04:29:40.108857  475694 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1216 04:29:40.108874  475694 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1216 04:29:40.108891  475694 command_runner.go:130] > # reload'.
	I1216 04:29:40.108898  475694 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1216 04:29:40.108905  475694 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1216 04:29:40.108915  475694 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1216 04:29:40.108922  475694 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
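In the `crio config` dump that follows, lines beginning with `#` are the commented defaults that CRI-O echoes back; only the handful of uncommented keys (conmon_cgroup, cgroup_manager, and the default_sysctls list) reflect values this minikube node actually sets.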
	I1216 04:29:40.108925  475694 command_runner.go:130] > [crio]
	I1216 04:29:40.108932  475694 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1216 04:29:40.108939  475694 command_runner.go:130] > # containers images, in this directory.
	I1216 04:29:40.109485  475694 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1216 04:29:40.109505  475694 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1216 04:29:40.110050  475694 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1216 04:29:40.110069  475694 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1216 04:29:40.110418  475694 command_runner.go:130] > # imagestore = ""
	I1216 04:29:40.110434  475694 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1216 04:29:40.110442  475694 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1216 04:29:40.110623  475694 command_runner.go:130] > # storage_driver = "overlay"
	I1216 04:29:40.110671  475694 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1216 04:29:40.110692  475694 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1216 04:29:40.110809  475694 command_runner.go:130] > # storage_option = [
	I1216 04:29:40.110816  475694 command_runner.go:130] > # ]
	I1216 04:29:40.110824  475694 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1216 04:29:40.110831  475694 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1216 04:29:40.110973  475694 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1216 04:29:40.110983  475694 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1216 04:29:40.111015  475694 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1216 04:29:40.111021  475694 command_runner.go:130] > # always happen on a node reboot
	I1216 04:29:40.111194  475694 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1216 04:29:40.111214  475694 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1216 04:29:40.111221  475694 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1216 04:29:40.111260  475694 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1216 04:29:40.111402  475694 command_runner.go:130] > # version_file_persist = ""
	I1216 04:29:40.111414  475694 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1216 04:29:40.111423  475694 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1216 04:29:40.111428  475694 command_runner.go:130] > # internal_wipe = true
	I1216 04:29:40.111436  475694 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1216 04:29:40.111471  475694 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1216 04:29:40.111604  475694 command_runner.go:130] > # internal_repair = true
	I1216 04:29:40.111614  475694 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1216 04:29:40.111621  475694 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1216 04:29:40.111626  475694 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1216 04:29:40.111750  475694 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1216 04:29:40.111761  475694 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1216 04:29:40.111764  475694 command_runner.go:130] > [crio.api]
	I1216 04:29:40.111770  475694 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1216 04:29:40.111973  475694 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1216 04:29:40.111983  475694 command_runner.go:130] > # IP address on which the stream server will listen.
	I1216 04:29:40.112123  475694 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1216 04:29:40.112134  475694 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1216 04:29:40.112139  475694 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1216 04:29:40.112334  475694 command_runner.go:130] > # stream_port = "0"
	I1216 04:29:40.112344  475694 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1216 04:29:40.112496  475694 command_runner.go:130] > # stream_enable_tls = false
	I1216 04:29:40.112506  475694 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1216 04:29:40.112646  475694 command_runner.go:130] > # stream_idle_timeout = ""
	I1216 04:29:40.112658  475694 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1216 04:29:40.112664  475694 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1216 04:29:40.112790  475694 command_runner.go:130] > # stream_tls_cert = ""
	I1216 04:29:40.112800  475694 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1216 04:29:40.112806  475694 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1216 04:29:40.112930  475694 command_runner.go:130] > # stream_tls_key = ""
	I1216 04:29:40.112940  475694 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1216 04:29:40.112947  475694 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1216 04:29:40.112956  475694 command_runner.go:130] > # automatically pick up the changes.
	I1216 04:29:40.113120  475694 command_runner.go:130] > # stream_tls_ca = ""
	I1216 04:29:40.113148  475694 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 04:29:40.113407  475694 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1216 04:29:40.113455  475694 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 04:29:40.113595  475694 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1216 04:29:40.113624  475694 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1216 04:29:40.113657  475694 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1216 04:29:40.113680  475694 command_runner.go:130] > [crio.runtime]
	I1216 04:29:40.113702  475694 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1216 04:29:40.113736  475694 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1216 04:29:40.113757  475694 command_runner.go:130] > # "nofile=1024:2048"
	I1216 04:29:40.113777  475694 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1216 04:29:40.113795  475694 command_runner.go:130] > # default_ulimits = [
	I1216 04:29:40.113822  475694 command_runner.go:130] > # ]
	I1216 04:29:40.113845  475694 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1216 04:29:40.113998  475694 command_runner.go:130] > # no_pivot = false
	I1216 04:29:40.114026  475694 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1216 04:29:40.114058  475694 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1216 04:29:40.114076  475694 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1216 04:29:40.114109  475694 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1216 04:29:40.114138  475694 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1216 04:29:40.114159  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 04:29:40.114189  475694 command_runner.go:130] > # conmon = ""
	I1216 04:29:40.114211  475694 command_runner.go:130] > # Cgroup setting for conmon
	I1216 04:29:40.114233  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1216 04:29:40.114382  475694 command_runner.go:130] > conmon_cgroup = "pod"
	I1216 04:29:40.114414  475694 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1216 04:29:40.114449  475694 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1216 04:29:40.114469  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 04:29:40.114514  475694 command_runner.go:130] > # conmon_env = [
	I1216 04:29:40.114538  475694 command_runner.go:130] > # ]
	I1216 04:29:40.114560  475694 command_runner.go:130] > # Additional environment variables to set for all the
	I1216 04:29:40.114591  475694 command_runner.go:130] > # containers. These are overridden if set in the
	I1216 04:29:40.114614  475694 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1216 04:29:40.114632  475694 command_runner.go:130] > # default_env = [
	I1216 04:29:40.114649  475694 command_runner.go:130] > # ]
	I1216 04:29:40.114679  475694 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1216 04:29:40.114706  475694 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1216 04:29:40.114884  475694 command_runner.go:130] > # selinux = false
	I1216 04:29:40.114896  475694 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1216 04:29:40.114903  475694 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1216 04:29:40.114909  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.114913  475694 command_runner.go:130] > # seccomp_profile = ""
	I1216 04:29:40.114950  475694 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1216 04:29:40.114969  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.114984  475694 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1216 04:29:40.115020  475694 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1216 04:29:40.115046  475694 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1216 04:29:40.115055  475694 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1216 04:29:40.115062  475694 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1216 04:29:40.115067  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.115072  475694 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1216 04:29:40.115077  475694 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1216 04:29:40.115116  475694 command_runner.go:130] > # the cgroup blockio controller.
	I1216 04:29:40.115133  475694 command_runner.go:130] > # blockio_config_file = ""
	I1216 04:29:40.115175  475694 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1216 04:29:40.115196  475694 command_runner.go:130] > # blockio parameters.
	I1216 04:29:40.115214  475694 command_runner.go:130] > # blockio_reload = false
	I1216 04:29:40.115235  475694 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1216 04:29:40.115262  475694 command_runner.go:130] > # irqbalance daemon.
	I1216 04:29:40.115417  475694 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1216 04:29:40.115505  475694 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1216 04:29:40.115615  475694 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1216 04:29:40.115655  475694 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1216 04:29:40.115678  475694 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1216 04:29:40.115698  475694 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1216 04:29:40.115716  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.115745  475694 command_runner.go:130] > # rdt_config_file = ""
	I1216 04:29:40.115769  475694 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1216 04:29:40.115788  475694 command_runner.go:130] > cgroup_manager = "cgroupfs"
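The active cgroup_manager = "cgroupfs" setting has to agree with the kubelet's cgroup driver, since a runtime/kubelet mismatch prevents pods from starting; the kubelet command rendered earlier sidesteps part of the cgroup hierarchy by passing --cgroups-per-qos=false together with an empty --enforce-node-allocatable=, which the kubelet requires as a pair.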
	I1216 04:29:40.115822  475694 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1216 04:29:40.115844  475694 command_runner.go:130] > # separate_pull_cgroup = ""
	I1216 04:29:40.115864  475694 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1216 04:29:40.115884  475694 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1216 04:29:40.115919  475694 command_runner.go:130] > # will be added.
	I1216 04:29:40.115936  475694 command_runner.go:130] > # default_capabilities = [
	I1216 04:29:40.115952  475694 command_runner.go:130] > # 	"CHOWN",
	I1216 04:29:40.115983  475694 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1216 04:29:40.116006  475694 command_runner.go:130] > # 	"FSETID",
	I1216 04:29:40.116024  475694 command_runner.go:130] > # 	"FOWNER",
	I1216 04:29:40.116040  475694 command_runner.go:130] > # 	"SETGID",
	I1216 04:29:40.116070  475694 command_runner.go:130] > # 	"SETUID",
	I1216 04:29:40.116112  475694 command_runner.go:130] > # 	"SETPCAP",
	I1216 04:29:40.116150  475694 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1216 04:29:40.116170  475694 command_runner.go:130] > # 	"KILL",
	I1216 04:29:40.116187  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116209  475694 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1216 04:29:40.116243  475694 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1216 04:29:40.116264  475694 command_runner.go:130] > # add_inheritable_capabilities = false
	I1216 04:29:40.116284  475694 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1216 04:29:40.116316  475694 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 04:29:40.116336  475694 command_runner.go:130] > default_sysctls = [
	I1216 04:29:40.116352  475694 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1216 04:29:40.116370  475694 command_runner.go:130] > ]
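The one sysctl left active, net.ipv4.ip_unprivileged_port_start=0, lets container processes bind ports below 1024 without CAP_NET_BIND_SERVICE, which is useful for workloads that listen on privileged ports as non-root.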
	I1216 04:29:40.116402  475694 command_runner.go:130] > # List of devices on the host that a
	I1216 04:29:40.116430  475694 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1216 04:29:40.116449  475694 command_runner.go:130] > # allowed_devices = [
	I1216 04:29:40.116482  475694 command_runner.go:130] > # 	"/dev/fuse",
	I1216 04:29:40.116502  475694 command_runner.go:130] > # 	"/dev/net/tun",
	I1216 04:29:40.116519  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116549  475694 command_runner.go:130] > # List of additional devices. specified as
	I1216 04:29:40.116842  475694 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1216 04:29:40.116898  475694 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1216 04:29:40.116921  475694 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 04:29:40.116950  475694 command_runner.go:130] > # additional_devices = [
	I1216 04:29:40.116977  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116996  475694 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1216 04:29:40.117028  475694 command_runner.go:130] > # cdi_spec_dirs = [
	I1216 04:29:40.117054  475694 command_runner.go:130] > # 	"/etc/cdi",
	I1216 04:29:40.117101  475694 command_runner.go:130] > # 	"/var/run/cdi",
	I1216 04:29:40.117118  475694 command_runner.go:130] > # ]
	I1216 04:29:40.117139  475694 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1216 04:29:40.117174  475694 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1216 04:29:40.117193  475694 command_runner.go:130] > # Defaults to false.
	I1216 04:29:40.117222  475694 command_runner.go:130] > # device_ownership_from_security_context = false
	I1216 04:29:40.117264  475694 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1216 04:29:40.117284  475694 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1216 04:29:40.117301  475694 command_runner.go:130] > # hooks_dir = [
	I1216 04:29:40.117338  475694 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1216 04:29:40.117357  475694 command_runner.go:130] > # ]
	I1216 04:29:40.117377  475694 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1216 04:29:40.117412  475694 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1216 04:29:40.117421  475694 command_runner.go:130] > # its default mounts from the following two files:
	I1216 04:29:40.117425  475694 command_runner.go:130] > #
	I1216 04:29:40.117432  475694 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1216 04:29:40.117438  475694 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1216 04:29:40.117444  475694 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1216 04:29:40.117447  475694 command_runner.go:130] > #
	I1216 04:29:40.117454  475694 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1216 04:29:40.117461  475694 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1216 04:29:40.117467  475694 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1216 04:29:40.117517  475694 command_runner.go:130] > #      only add mounts it finds in this file.
	I1216 04:29:40.117534  475694 command_runner.go:130] > #
	I1216 04:29:40.117567  475694 command_runner.go:130] > # default_mounts_file = ""
	I1216 04:29:40.117599  475694 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1216 04:29:40.117644  475694 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1216 04:29:40.117670  475694 command_runner.go:130] > # pids_limit = -1
	I1216 04:29:40.117691  475694 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1216 04:29:40.117725  475694 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1216 04:29:40.117753  475694 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1216 04:29:40.117773  475694 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1216 04:29:40.117806  475694 command_runner.go:130] > # log_size_max = -1
	I1216 04:29:40.117830  475694 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1216 04:29:40.117850  475694 command_runner.go:130] > # log_to_journald = false
	I1216 04:29:40.117889  475694 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1216 04:29:40.117908  475694 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1216 04:29:40.117927  475694 command_runner.go:130] > # Path to directory for container attach sockets.
	I1216 04:29:40.117963  475694 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1216 04:29:40.117992  475694 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1216 04:29:40.118011  475694 command_runner.go:130] > # bind_mount_prefix = ""
	I1216 04:29:40.118045  475694 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1216 04:29:40.118064  475694 command_runner.go:130] > # read_only = false
	I1216 04:29:40.118085  475694 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1216 04:29:40.118118  475694 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1216 04:29:40.118145  475694 command_runner.go:130] > # live configuration reload.
	I1216 04:29:40.118163  475694 command_runner.go:130] > # log_level = "info"
	I1216 04:29:40.118200  475694 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1216 04:29:40.118229  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.118246  475694 command_runner.go:130] > # log_filter = ""
	I1216 04:29:40.118284  475694 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1216 04:29:40.118305  475694 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1216 04:29:40.118324  475694 command_runner.go:130] > # separated by comma.
	I1216 04:29:40.118360  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118379  475694 command_runner.go:130] > # uid_mappings = ""
	I1216 04:29:40.118400  475694 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1216 04:29:40.118433  475694 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1216 04:29:40.118453  475694 command_runner.go:130] > # separated by comma.
	I1216 04:29:40.118475  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118516  475694 command_runner.go:130] > # gid_mappings = ""
	I1216 04:29:40.118547  475694 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1216 04:29:40.118581  475694 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 04:29:40.118608  475694 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 04:29:40.118630  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118663  475694 command_runner.go:130] > # minimum_mappable_uid = -1
	I1216 04:29:40.118694  475694 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1216 04:29:40.118716  475694 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 04:29:40.118867  475694 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 04:29:40.119059  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.119080  475694 command_runner.go:130] > # minimum_mappable_gid = -1
	I1216 04:29:40.119119  475694 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1216 04:29:40.119149  475694 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1216 04:29:40.119169  475694 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1216 04:29:40.119206  475694 command_runner.go:130] > # ctr_stop_timeout = 30
	I1216 04:29:40.119228  475694 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1216 04:29:40.119249  475694 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1216 04:29:40.119286  475694 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1216 04:29:40.119304  475694 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1216 04:29:40.119323  475694 command_runner.go:130] > # drop_infra_ctr = true
	I1216 04:29:40.119357  475694 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1216 04:29:40.119378  475694 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1216 04:29:40.119425  475694 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1216 04:29:40.119453  475694 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1216 04:29:40.119476  475694 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1216 04:29:40.119511  475694 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1216 04:29:40.119541  475694 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1216 04:29:40.119560  475694 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1216 04:29:40.119590  475694 command_runner.go:130] > # shared_cpuset = ""
	I1216 04:29:40.119612  475694 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1216 04:29:40.119632  475694 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1216 04:29:40.119663  475694 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1216 04:29:40.119695  475694 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1216 04:29:40.119739  475694 command_runner.go:130] > # pinns_path = ""
	I1216 04:29:40.119766  475694 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1216 04:29:40.119787  475694 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1216 04:29:40.119820  475694 command_runner.go:130] > # enable_criu_support = true
	I1216 04:29:40.119849  475694 command_runner.go:130] > # Enable/disable the generation of the container,
	I1216 04:29:40.119870  475694 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1216 04:29:40.119901  475694 command_runner.go:130] > # enable_pod_events = false
	I1216 04:29:40.119923  475694 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1216 04:29:40.119945  475694 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1216 04:29:40.119977  475694 command_runner.go:130] > # default_runtime = "crun"
	I1216 04:29:40.120005  475694 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1216 04:29:40.120029  475694 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1216 04:29:40.120074  475694 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1216 04:29:40.120094  475694 command_runner.go:130] > # creation as a file is not desired either.
	I1216 04:29:40.120134  475694 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1216 04:29:40.120162  475694 command_runner.go:130] > # the hostname is being managed dynamically.
	I1216 04:29:40.120182  475694 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1216 04:29:40.120216  475694 command_runner.go:130] > # ]
	I1216 04:29:40.120248  475694 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1216 04:29:40.120270  475694 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1216 04:29:40.120320  475694 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1216 04:29:40.120347  475694 command_runner.go:130] > # Each entry in the table should follow the format:
	I1216 04:29:40.120396  475694 command_runner.go:130] > #
	I1216 04:29:40.120416  475694 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1216 04:29:40.120435  475694 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1216 04:29:40.120469  475694 command_runner.go:130] > # runtime_type = "oci"
	I1216 04:29:40.120490  475694 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1216 04:29:40.120514  475694 command_runner.go:130] > # inherit_default_runtime = false
	I1216 04:29:40.120552  475694 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1216 04:29:40.120570  475694 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1216 04:29:40.120589  475694 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1216 04:29:40.120618  475694 command_runner.go:130] > # monitor_env = []
	I1216 04:29:40.120639  475694 command_runner.go:130] > # privileged_without_host_devices = false
	I1216 04:29:40.120667  475694 command_runner.go:130] > # allowed_annotations = []
	I1216 04:29:40.120700  475694 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1216 04:29:40.120720  475694 command_runner.go:130] > # no_sync_log = false
	I1216 04:29:40.120739  475694 command_runner.go:130] > # default_annotations = {}
	I1216 04:29:40.120771  475694 command_runner.go:130] > # stream_websockets = false
	I1216 04:29:40.120795  475694 command_runner.go:130] > # seccomp_profile = ""
	I1216 04:29:40.120859  475694 command_runner.go:130] > # Where:
	I1216 04:29:40.120892  475694 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1216 04:29:40.120926  475694 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1216 04:29:40.120956  475694 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1216 04:29:40.120976  475694 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1216 04:29:40.121008  475694 command_runner.go:130] > #   in $PATH.
	I1216 04:29:40.121038  475694 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1216 04:29:40.121057  475694 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1216 04:29:40.121115  475694 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1216 04:29:40.121133  475694 command_runner.go:130] > #   state.
	I1216 04:29:40.121155  475694 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1216 04:29:40.121189  475694 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1216 04:29:40.121228  475694 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1216 04:29:40.121250  475694 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1216 04:29:40.121270  475694 command_runner.go:130] > #   the values from the default runtime on load time.
	I1216 04:29:40.121300  475694 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1216 04:29:40.121328  475694 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1216 04:29:40.121349  475694 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1216 04:29:40.121370  475694 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1216 04:29:40.121404  475694 command_runner.go:130] > #   The currently recognized values are:
	I1216 04:29:40.121434  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1216 04:29:40.121457  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1216 04:29:40.121484  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1216 04:29:40.121518  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1216 04:29:40.121541  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1216 04:29:40.121564  475694 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1216 04:29:40.121592  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1216 04:29:40.121620  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1216 04:29:40.121640  475694 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1216 04:29:40.121671  475694 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1216 04:29:40.121692  475694 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1216 04:29:40.121712  475694 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1216 04:29:40.121747  475694 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1216 04:29:40.121775  475694 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1216 04:29:40.121796  475694 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1216 04:29:40.121818  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1216 04:29:40.121849  475694 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1216 04:29:40.121873  475694 command_runner.go:130] > #   deprecated option "conmon".
	I1216 04:29:40.121896  475694 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1216 04:29:40.121916  475694 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1216 04:29:40.121945  475694 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1216 04:29:40.121969  475694 command_runner.go:130] > #   should be moved to the container's cgroup
	I1216 04:29:40.121989  475694 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1216 04:29:40.122009  475694 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1216 04:29:40.122039  475694 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1216 04:29:40.122065  475694 command_runner.go:130] > #   conmon-rs by using:
	I1216 04:29:40.122085  475694 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1216 04:29:40.122108  475694 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1216 04:29:40.122138  475694 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1216 04:29:40.122166  475694 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1216 04:29:40.122184  475694 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1216 04:29:40.122204  475694 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1216 04:29:40.122236  475694 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1216 04:29:40.122262  475694 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1216 04:29:40.122285  475694 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1216 04:29:40.122332  475694 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1216 04:29:40.122360  475694 command_runner.go:130] > #   when a machine crash happens.
	I1216 04:29:40.122382  475694 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1216 04:29:40.122406  475694 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1216 04:29:40.122443  475694 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1216 04:29:40.122473  475694 command_runner.go:130] > #   seccomp profile for the runtime.
	I1216 04:29:40.122495  475694 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1216 04:29:40.122537  475694 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1216 04:29:40.122553  475694 command_runner.go:130] > #
	I1216 04:29:40.122572  475694 command_runner.go:130] > # Using the seccomp notifier feature:
	I1216 04:29:40.122589  475694 command_runner.go:130] > #
	I1216 04:29:40.122624  475694 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1216 04:29:40.122646  475694 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1216 04:29:40.122662  475694 command_runner.go:130] > #
	I1216 04:29:40.122693  475694 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1216 04:29:40.122721  475694 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1216 04:29:40.122737  475694 command_runner.go:130] > #
	I1216 04:29:40.122758  475694 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1216 04:29:40.122777  475694 command_runner.go:130] > # feature.
	I1216 04:29:40.122810  475694 command_runner.go:130] > #
	I1216 04:29:40.122842  475694 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1216 04:29:40.122863  475694 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1216 04:29:40.122893  475694 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1216 04:29:40.122913  475694 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1216 04:29:40.122933  475694 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1216 04:29:40.122960  475694 command_runner.go:130] > #
	I1216 04:29:40.122986  475694 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1216 04:29:40.123006  475694 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1216 04:29:40.123023  475694 command_runner.go:130] > #
	I1216 04:29:40.123043  475694 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1216 04:29:40.123079  475694 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1216 04:29:40.123096  475694 command_runner.go:130] > #
	I1216 04:29:40.123117  475694 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1216 04:29:40.123147  475694 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1216 04:29:40.123171  475694 command_runner.go:130] > # limitation.
	I1216 04:29:40.123187  475694 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1216 04:29:40.123204  475694 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1216 04:29:40.123225  475694 command_runner.go:130] > runtime_type = ""
	I1216 04:29:40.123264  475694 command_runner.go:130] > runtime_root = "/run/crun"
	I1216 04:29:40.123284  475694 command_runner.go:130] > inherit_default_runtime = false
	I1216 04:29:40.123302  475694 command_runner.go:130] > runtime_config_path = ""
	I1216 04:29:40.123331  475694 command_runner.go:130] > container_min_memory = ""
	I1216 04:29:40.123357  475694 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 04:29:40.123375  475694 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 04:29:40.123394  475694 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 04:29:40.123413  475694 command_runner.go:130] > allowed_annotations = [
	I1216 04:29:40.123445  475694 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1216 04:29:40.123463  475694 command_runner.go:130] > ]
	I1216 04:29:40.123482  475694 command_runner.go:130] > privileged_without_host_devices = false
	I1216 04:29:40.123501  475694 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1216 04:29:40.123534  475694 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1216 04:29:40.123552  475694 command_runner.go:130] > runtime_type = ""
	I1216 04:29:40.123570  475694 command_runner.go:130] > runtime_root = "/run/runc"
	I1216 04:29:40.123589  475694 command_runner.go:130] > inherit_default_runtime = false
	I1216 04:29:40.123625  475694 command_runner.go:130] > runtime_config_path = ""
	I1216 04:29:40.123644  475694 command_runner.go:130] > container_min_memory = ""
	I1216 04:29:40.123670  475694 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 04:29:40.123707  475694 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 04:29:40.123742  475694 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 04:29:40.123785  475694 command_runner.go:130] > privileged_without_host_devices = false
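The two concrete entries above (crun and runc) instantiate the documented format. For illustration only, a hypothetical additional handler in the same TOML shape (the "kata" name and every path below are placeholders, not part of this run's configuration):

	[crio.runtime.runtimes.kata]
	# runtime_type "vm" is what permits runtime_config_path, per the field docs above.
	runtime_path = "/usr/bin/containerd-shim-kata-v2"
	runtime_type = "vm"
	runtime_root = "/run/vc"
	runtime_config_path = "/etc/kata-containers/configuration.toml"
	# Opt this handler into the seccomp notifier annotation discussed above.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]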
	I1216 04:29:40.123815  475694 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1216 04:29:40.123837  475694 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1216 04:29:40.123859  475694 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1216 04:29:40.123892  475694 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1216 04:29:40.123918  475694 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1216 04:29:40.123943  475694 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1216 04:29:40.123978  475694 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1216 04:29:40.123998  475694 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1216 04:29:40.124022  475694 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1216 04:29:40.124054  475694 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1216 04:29:40.124075  475694 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1216 04:29:40.124108  475694 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1216 04:29:40.124142  475694 command_runner.go:130] > # Example:
	I1216 04:29:40.124163  475694 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1216 04:29:40.124183  475694 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1216 04:29:40.124217  475694 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1216 04:29:40.124245  475694 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1216 04:29:40.124262  475694 command_runner.go:130] > # cpuset = "0-1"
	I1216 04:29:40.124279  475694 command_runner.go:130] > # cpushares = "5"
	I1216 04:29:40.124296  475694 command_runner.go:130] > # cpuquota = "1000"
	I1216 04:29:40.124329  475694 command_runner.go:130] > # cpuperiod = "100000"
	I1216 04:29:40.124347  475694 command_runner.go:130] > # cpulimit = "35"
	I1216 04:29:40.124367  475694 command_runner.go:130] > # Where:
	I1216 04:29:40.124385  475694 command_runner.go:130] > # The workload name is workload-type.
	I1216 04:29:40.124421  475694 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1216 04:29:40.124440  475694 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1216 04:29:40.124460  475694 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1216 04:29:40.124492  475694 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1216 04:29:40.124517  475694 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
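Following the example just described, the pod-side opt-in is plain metadata: the activation annotation as a bare key, plus the per-container override in the form shown above. A minimal sketch (the container name "app" and the value are placeholders):

	metadata:
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/app: '{"cpushares": "512"}'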
	I1216 04:29:40.124536  475694 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1216 04:29:40.124556  475694 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1216 04:29:40.124575  475694 command_runner.go:130] > # Default value is set to true
	I1216 04:29:40.124610  475694 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1216 04:29:40.124630  475694 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1216 04:29:40.124649  475694 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1216 04:29:40.124667  475694 command_runner.go:130] > # Default value is set to 'false'
	I1216 04:29:40.124699  475694 command_runner.go:130] > # disable_hostport_mapping = false
	I1216 04:29:40.124718  475694 command_runner.go:130] > # timezone: Sets the timezone for a container in CRI-O.
	I1216 04:29:40.124741  475694 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1216 04:29:40.124768  475694 command_runner.go:130] > # timezone = ""
	I1216 04:29:40.124795  475694 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1216 04:29:40.124810  475694 command_runner.go:130] > #
	I1216 04:29:40.124829  475694 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1216 04:29:40.124850  475694 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1216 04:29:40.124892  475694 command_runner.go:130] > [crio.image]
	I1216 04:29:40.124912  475694 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1216 04:29:40.124930  475694 command_runner.go:130] > # default_transport = "docker://"
	I1216 04:29:40.124959  475694 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1216 04:29:40.125019  475694 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1216 04:29:40.125026  475694 command_runner.go:130] > # global_auth_file = ""
	I1216 04:29:40.125031  475694 command_runner.go:130] > # The image used to instantiate infra containers.
	I1216 04:29:40.125036  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.125041  475694 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.125093  475694 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1216 04:29:40.125106  475694 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1216 04:29:40.125111  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.125121  475694 command_runner.go:130] > # pause_image_auth_file = ""
	I1216 04:29:40.125127  475694 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1216 04:29:40.125133  475694 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1216 04:29:40.125139  475694 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1216 04:29:40.125145  475694 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1216 04:29:40.125160  475694 command_runner.go:130] > # pause_command = "/pause"
	I1216 04:29:40.125167  475694 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1216 04:29:40.125172  475694 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1216 04:29:40.125178  475694 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1216 04:29:40.125184  475694 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1216 04:29:40.125190  475694 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1216 04:29:40.125198  475694 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1216 04:29:40.125209  475694 command_runner.go:130] > # pinned_images = [
	I1216 04:29:40.125213  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125219  475694 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1216 04:29:40.125226  475694 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1216 04:29:40.125232  475694 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1216 04:29:40.125238  475694 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1216 04:29:40.125243  475694 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1216 04:29:40.125248  475694 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1216 04:29:40.125253  475694 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1216 04:29:40.125268  475694 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1216 04:29:40.125275  475694 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1216 04:29:40.125281  475694 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1216 04:29:40.125287  475694 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1216 04:29:40.125291  475694 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
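This run points signature_policy at /etc/crio/policy.json. The classic accept-everything policy from containers-policy.json(5) looks like the sketch below; whether this node's file matches it is an assumption, not something the log shows:

	{
	  "default": [
	    { "type": "insecureAcceptAnything" }
	  ]
	}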
	I1216 04:29:40.125298  475694 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1216 04:29:40.125304  475694 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1216 04:29:40.125308  475694 command_runner.go:130] > # changing them here.
	I1216 04:29:40.125313  475694 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1216 04:29:40.125317  475694 command_runner.go:130] > # insecure_registries = [
	I1216 04:29:40.125325  475694 command_runner.go:130] > # ]
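As the deprecation note says, the supported home for this setting is containers-registries.conf(5). A minimal sketch of the replacement syntax (the registry host is a placeholder):

	[[registry]]
	location = "registry.example.internal:5000"
	insecure = true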
	I1216 04:29:40.125331  475694 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1216 04:29:40.125338  475694 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1216 04:29:40.125343  475694 command_runner.go:130] > # image_volumes = "mkdir"
	I1216 04:29:40.125348  475694 command_runner.go:130] > # Temporary directory to use for storing big files
	I1216 04:29:40.125352  475694 command_runner.go:130] > # big_files_temporary_dir = ""
	I1216 04:29:40.125358  475694 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1216 04:29:40.125365  475694 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1216 04:29:40.125369  475694 command_runner.go:130] > # auto_reload_registries = false
	I1216 04:29:40.125375  475694 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1216 04:29:40.125386  475694 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1216 04:29:40.125392  475694 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1216 04:29:40.125396  475694 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1216 04:29:40.125400  475694 command_runner.go:130] > # The mode of short name resolution.
	I1216 04:29:40.125406  475694 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1216 04:29:40.125414  475694 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1216 04:29:40.125419  475694 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1216 04:29:40.125422  475694 command_runner.go:130] > # short_name_mode = "enforcing"
	I1216 04:29:40.125428  475694 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1216 04:29:40.125435  475694 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1216 04:29:40.125439  475694 command_runner.go:130] > # oci_artifact_mount_support = true
	I1216 04:29:40.125445  475694 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1216 04:29:40.125449  475694 command_runner.go:130] > # CNI plugins.
	I1216 04:29:40.125456  475694 command_runner.go:130] > [crio.network]
	I1216 04:29:40.125462  475694 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1216 04:29:40.125467  475694 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1216 04:29:40.125471  475694 command_runner.go:130] > # cni_default_network = ""
	I1216 04:29:40.125476  475694 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1216 04:29:40.125481  475694 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1216 04:29:40.125487  475694 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1216 04:29:40.125498  475694 command_runner.go:130] > # plugin_dirs = [
	I1216 04:29:40.125501  475694 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1216 04:29:40.125504  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125508  475694 command_runner.go:130] > # List of included pod metrics.
	I1216 04:29:40.125512  475694 command_runner.go:130] > # included_pod_metrics = [
	I1216 04:29:40.125515  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125521  475694 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1216 04:29:40.125524  475694 command_runner.go:130] > [crio.metrics]
	I1216 04:29:40.125529  475694 command_runner.go:130] > # Globally enable or disable metrics support.
	I1216 04:29:40.125533  475694 command_runner.go:130] > # enable_metrics = false
	I1216 04:29:40.125537  475694 command_runner.go:130] > # Specify enabled metrics collectors.
	I1216 04:29:40.125542  475694 command_runner.go:130] > # Per default all metrics are enabled.
	I1216 04:29:40.125549  475694 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1216 04:29:40.125557  475694 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1216 04:29:40.125564  475694 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1216 04:29:40.125568  475694 command_runner.go:130] > # metrics_collectors = [
	I1216 04:29:40.125572  475694 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1216 04:29:40.125576  475694 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1216 04:29:40.125580  475694 command_runner.go:130] > # 	"containers_oom_total",
	I1216 04:29:40.125584  475694 command_runner.go:130] > # 	"processes_defunct",
	I1216 04:29:40.125587  475694 command_runner.go:130] > # 	"operations_total",
	I1216 04:29:40.125591  475694 command_runner.go:130] > # 	"operations_latency_seconds",
	I1216 04:29:40.125596  475694 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1216 04:29:40.125600  475694 command_runner.go:130] > # 	"operations_errors_total",
	I1216 04:29:40.125604  475694 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1216 04:29:40.125608  475694 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1216 04:29:40.125615  475694 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1216 04:29:40.125619  475694 command_runner.go:130] > # 	"image_pulls_success_total",
	I1216 04:29:40.125623  475694 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1216 04:29:40.125627  475694 command_runner.go:130] > # 	"containers_oom_count_total",
	I1216 04:29:40.125632  475694 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1216 04:29:40.125636  475694 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1216 04:29:40.125640  475694 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1216 04:29:40.125643  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125649  475694 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1216 04:29:40.125653  475694 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1216 04:29:40.125658  475694 command_runner.go:130] > # The port on which the metrics server will listen.
	I1216 04:29:40.125662  475694 command_runner.go:130] > # metrics_port = 9090
	I1216 04:29:40.125667  475694 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1216 04:29:40.125670  475694 command_runner.go:130] > # metrics_socket = ""
	I1216 04:29:40.125678  475694 command_runner.go:130] > # The certificate for the secure metrics server.
	I1216 04:29:40.125684  475694 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1216 04:29:40.125690  475694 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1216 04:29:40.125694  475694 command_runner.go:130] > # certificate on any modification event.
	I1216 04:29:40.125698  475694 command_runner.go:130] > # metrics_cert = ""
	I1216 04:29:40.125703  475694 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1216 04:29:40.125708  475694 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1216 04:29:40.125711  475694 command_runner.go:130] > # metrics_key = ""
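Given the defaults above (metrics_host = "127.0.0.1", metrics_port = 9090), a node with enable_metrics = true could be spot-checked directly; this is a sketch, since metrics stay disabled in this run's config:

	$ curl -s http://127.0.0.1:9090/metrics | grep -m1 crio_operations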
	I1216 04:29:40.125718  475694 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1216 04:29:40.125721  475694 command_runner.go:130] > [crio.tracing]
	I1216 04:29:40.125726  475694 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1216 04:29:40.125730  475694 command_runner.go:130] > # enable_tracing = false
	I1216 04:29:40.125735  475694 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1216 04:29:40.125740  475694 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1216 04:29:40.125747  475694 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1216 04:29:40.125753  475694 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1216 04:29:40.125757  475694 command_runner.go:130] > # CRI-O NRI configuration.
	I1216 04:29:40.125760  475694 command_runner.go:130] > [crio.nri]
	I1216 04:29:40.125764  475694 command_runner.go:130] > # Globally enable or disable NRI.
	I1216 04:29:40.125772  475694 command_runner.go:130] > # enable_nri = true
	I1216 04:29:40.125776  475694 command_runner.go:130] > # NRI socket to listen on.
	I1216 04:29:40.125781  475694 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1216 04:29:40.125785  475694 command_runner.go:130] > # NRI plugin directory to use.
	I1216 04:29:40.125789  475694 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1216 04:29:40.125794  475694 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1216 04:29:40.125799  475694 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1216 04:29:40.125804  475694 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1216 04:29:40.125861  475694 command_runner.go:130] > # nri_disable_connections = false
	I1216 04:29:40.125867  475694 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1216 04:29:40.125871  475694 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1216 04:29:40.125876  475694 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1216 04:29:40.125881  475694 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1216 04:29:40.125885  475694 command_runner.go:130] > # NRI default validator configuration.
	I1216 04:29:40.125892  475694 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1216 04:29:40.125898  475694 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1216 04:29:40.125902  475694 command_runner.go:130] > # can be restricted/rejected:
	I1216 04:29:40.125905  475694 command_runner.go:130] > # - OCI hook injection
	I1216 04:29:40.125910  475694 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1216 04:29:40.125915  475694 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1216 04:29:40.125919  475694 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1216 04:29:40.125923  475694 command_runner.go:130] > # - adjustment of linux namespaces
	I1216 04:29:40.125929  475694 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1216 04:29:40.125936  475694 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1216 04:29:40.125941  475694 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1216 04:29:40.125944  475694 command_runner.go:130] > #
	I1216 04:29:40.125948  475694 command_runner.go:130] > # [crio.nri.default_validator]
	I1216 04:29:40.125953  475694 command_runner.go:130] > # nri_enable_default_validator = false
	I1216 04:29:40.125958  475694 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1216 04:29:40.125963  475694 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1216 04:29:40.125969  475694 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1216 04:29:40.125974  475694 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1216 04:29:40.125979  475694 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1216 04:29:40.125986  475694 command_runner.go:130] > # nri_validator_required_plugins = [
	I1216 04:29:40.125991  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125996  475694 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
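Enabling the validator described above amounts to uncommenting these keys with the desired values; a sketch (the required plugin name is a placeholder):

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_required_plugins = [
		"example-policy-plugin",
	]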
	I1216 04:29:40.126002  475694 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1216 04:29:40.126007  475694 command_runner.go:130] > [crio.stats]
	I1216 04:29:40.126013  475694 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1216 04:29:40.126018  475694 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1216 04:29:40.126022  475694 command_runner.go:130] > # stats_collection_period = 0
	I1216 04:29:40.126028  475694 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1216 04:29:40.126034  475694 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1216 04:29:40.126038  475694 command_runner.go:130] > # collection_period = 0
	I1216 04:29:40.126084  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086834829Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1216 04:29:40.126093  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086875912Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1216 04:29:40.126103  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086913837Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1216 04:29:40.126111  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086943031Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1216 04:29:40.126123  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.087027733Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:40.126132  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.087362399Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1216 04:29:40.126142  475694 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
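The "Updating config" messages above trace CRI-O's layered loading: the base file first, then drop-ins from /etc/crio/crio.conf.d in order, so 10-crio.conf overrides 02-crio.conf. To inspect the merged result on the node (not a step this test performs), the crio binary's config subcommand prints the configuration it would run with:

	$ crio config | less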
	I1216 04:29:40.126226  475694 cni.go:84] Creating CNI manager for ""
	I1216 04:29:40.126235  475694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:29:40.126255  475694 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:29:40.126277  475694 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-763073 NodeName:functional-763073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:29:40.126422  475694 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-763073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
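The rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new below before the kubelet is restarted. On kubeadm versions that ship the validate subcommand (v1.26+), a file like this can be sanity-checked by hand; a sketch, not a step minikube performs here:

	$ kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new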
	I1216 04:29:40.126497  475694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:29:40.134815  475694 command_runner.go:130] > kubeadm
	I1216 04:29:40.134839  475694 command_runner.go:130] > kubectl
	I1216 04:29:40.134844  475694 command_runner.go:130] > kubelet
	I1216 04:29:40.134872  475694 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:29:40.134932  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:29:40.143529  475694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 04:29:40.156375  475694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:29:40.169188  475694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 04:29:40.182223  475694 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:29:40.185968  475694 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 04:29:40.186105  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:40.327743  475694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:29:41.068736  475694 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073 for IP: 192.168.49.2
	I1216 04:29:41.068757  475694 certs.go:195] generating shared ca certs ...
	I1216 04:29:41.068779  475694 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.069050  475694 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:29:41.069145  475694 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:29:41.069172  475694 certs.go:257] generating profile certs ...
	I1216 04:29:41.069366  475694 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key
	I1216 04:29:41.069439  475694 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195
	I1216 04:29:41.069492  475694 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key
	I1216 04:29:41.069508  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 04:29:41.069527  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 04:29:41.069550  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 04:29:41.069568  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 04:29:41.069598  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 04:29:41.069624  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 04:29:41.069636  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 04:29:41.069661  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 04:29:41.069722  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 04:29:41.069792  475694 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 04:29:41.069804  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:29:41.069832  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:29:41.069864  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:29:41.069933  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:29:41.070011  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:29:41.070050  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.070068  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.070082  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem -> /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.070740  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:29:41.088516  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:29:41.106273  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:29:41.124169  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:29:41.142346  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:29:41.160632  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:29:41.181690  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:29:41.199949  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:29:41.217789  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 04:29:41.237601  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:29:41.255073  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 04:29:41.272738  475694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:29:41.286149  475694 ssh_runner.go:195] Run: openssl version
	I1216 04:29:41.292023  475694 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 04:29:41.292477  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.299852  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 04:29:41.307795  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312150  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312182  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312250  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.353168  475694 command_runner.go:130] > 3ec20f2e
	I1216 04:29:41.353674  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:29:41.362516  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.370150  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:29:41.377841  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.381956  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.381986  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.382040  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.422880  475694 command_runner.go:130] > b5213941
	I1216 04:29:41.423347  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:29:41.430980  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.438640  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 04:29:41.446570  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450618  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450691  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450770  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.493534  475694 command_runner.go:130] > 51391683
	I1216 04:29:41.494044  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
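The openssl/ln pairs above implement the standard OpenSSL subject-hash layout: each CA certificate is symlinked as <hash>.0 under /etc/ssl/certs so TLS libraries can locate it by hash. The same effect by hand, with a placeholder certificate path:

	$ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	$ sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/"$h".0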
	I1216 04:29:41.501730  475694 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:29:41.505651  475694 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:29:41.505723  475694 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 04:29:41.505736  475694 command_runner.go:130] > Device: 259,1	Inode: 1313043     Links: 1
	I1216 04:29:41.505744  475694 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:29:41.505751  475694 command_runner.go:130] > Access: 2025-12-16 04:25:32.918538317 +0000
	I1216 04:29:41.505756  475694 command_runner.go:130] > Modify: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505760  475694 command_runner.go:130] > Change: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505765  475694 command_runner.go:130] >  Birth: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505860  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:29:41.547026  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.547554  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:29:41.588926  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.589431  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:29:41.630503  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.630976  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:29:41.679374  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.679872  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:29:41.720872  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.720962  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 04:29:41.763843  475694 command_runner.go:130] > Certificate will not expire
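Each -checkend 86400 invocation above asks whether the certificate expires within the next 86400 seconds (24 hours): openssl prints "Certificate will not expire" and exits 0 if it stays valid, otherwise it prints "Certificate will expire" and exits 1, which makes it easy to script:

	$ openssl x509 -noout -in apiserver.crt -checkend 86400 || echo "renew soon"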
	I1216 04:29:41.764306  475694 kubeadm.go:401] StartCluster: {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:41.764397  475694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:29:41.764473  475694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:29:41.794813  475694 cri.go:89] found id: ""
	I1216 04:29:41.795018  475694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:29:41.802238  475694 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 04:29:41.802260  475694 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 04:29:41.802267  475694 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 04:29:41.803148  475694 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:29:41.803169  475694 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:29:41.803241  475694 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:29:41.810442  475694 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:29:41.810892  475694 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-763073" does not appear in /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.811005  475694 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-438353/kubeconfig needs updating (will repair): [kubeconfig missing "functional-763073" cluster setting kubeconfig missing "functional-763073" context setting]
	I1216 04:29:41.811272  475694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
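(The kubeconfig repair above fires because the file contains neither a cluster nor a context entry named functional-763073, so minikube rewrites it before proceeding. A sketch of the same detection using client-go's clientcmd loader; the log uses a custom kubeconfig path, while the default home-file path stands in for it here:

    package main

    import (
    	"fmt"
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// The log points at a Jenkins-specific kubeconfig; use the default here.
    	path := clientcmd.RecommendedHomeFile
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	name := "functional-763073"
    	if _, ok := cfg.Clusters[name]; !ok {
    		fmt.Printf("kubeconfig missing %q cluster setting\n", name)
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		fmt.Printf("kubeconfig missing %q context setting\n", name)
    	}
    }

)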
	I1216 04:29:41.811696  475694 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.811844  475694 kapi.go:59] client config for functional-763073: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:29:41.812430  475694 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 04:29:41.812449  475694 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 04:29:41.812455  475694 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 04:29:41.812459  475694 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 04:29:41.812464  475694 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 04:29:41.812504  475694 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:29:41.812753  475694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:29:41.827245  475694 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 04:29:41.827324  475694 kubeadm.go:602] duration metric: took 24.148626ms to restartPrimaryControlPlane
	I1216 04:29:41.827348  475694 kubeadm.go:403] duration metric: took 63.050551ms to StartCluster
	I1216 04:29:41.827392  475694 settings.go:142] acquiring lock: {Name:mk7579526d30444d4a36dd9eeacfd82389e55168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.827497  475694 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.828225  475694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.828522  475694 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:29:41.828868  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:41.828926  475694 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:29:41.829003  475694 addons.go:70] Setting storage-provisioner=true in profile "functional-763073"
	I1216 04:29:41.829025  475694 addons.go:239] Setting addon storage-provisioner=true in "functional-763073"
	I1216 04:29:41.829051  475694 host.go:66] Checking if "functional-763073" exists ...
	I1216 04:29:41.829717  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.829866  475694 addons.go:70] Setting default-storageclass=true in profile "functional-763073"
	I1216 04:29:41.829889  475694 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-763073"
	I1216 04:29:41.830179  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.835425  475694 out.go:179] * Verifying Kubernetes components...
	I1216 04:29:41.843204  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:41.852282  475694 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.852487  475694 kapi.go:59] client config for functional-763073: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:29:41.852847  475694 addons.go:239] Setting addon default-storageclass=true in "functional-763073"
	I1216 04:29:41.852883  475694 host.go:66] Checking if "functional-763073" exists ...
	I1216 04:29:41.853441  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.902066  475694 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:29:41.905129  475694 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:41.905181  475694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:29:41.905276  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:41.908977  475694 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:41.909002  475694 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:29:41.909132  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:41.960105  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:41.975058  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:42.043859  475694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:29:42.092471  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:42.106008  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:42.818195  475694 node_ready.go:35] waiting up to 6m0s for node "functional-763073" to be "Ready" ...
	I1216 04:29:42.818367  475694 type.go:168] "Request Body" body=""
	I1216 04:29:42.818432  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:42.818659  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:42.818682  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818701  475694 retry.go:31] will retry after 327.643243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818740  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:42.818752  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818759  475694 retry.go:31] will retry after 171.339125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818814  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
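(The round_trippers.go "Request"/"Response" pairs are client-go's verbose HTTP tracing; the empty status and milliseconds=0 here mean the TCP dial was refused before any HTTP exchange happened. The same tracing effect can be had with a plain http.RoundTripper wrapper; a minimal sketch, not client-go's actual implementation:

    package main

    import (
    	"log"
    	"net/http"
    )

    // loggingTransport logs each request and its response status, roughly
    // what client-go's round_trippers.go does at high verbosity.
    type loggingTransport struct {
    	next http.RoundTripper
    }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	log.Printf("Request verb=%s url=%s", req.Method, req.URL)
    	resp, err := t.next.RoundTrip(req)
    	if err != nil {
    		// Connection refused surfaces here, before any status exists.
    		log.Printf("Response error: %v", err)
    		return nil, err
    	}
    	log.Printf("Response status=%q", resp.Status)
    	return resp, nil
    }

    func main() {
    	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
    	// Example target mirroring the log's node GET.
    	if _, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-763073"); err != nil {
    		log.Print(err)
    	}
    }

)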
	I1216 04:29:42.990327  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:43.052462  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.052555  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.052597  475694 retry.go:31] will retry after 320.089446ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
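(Each failed apply is re-queued by retry.go with a growing, jittered delay: 171ms and 320ms here, stretching to several seconds further down the log. A generic sketch of that backoff pattern; the function name and parameters are illustrative, not minikube's actual retry API:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn with exponentially growing, jittered
    // delays, the pattern visible in the "will retry after ..." lines.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := base << uint(i)                              // exponential growth
    		delay += time.Duration(rand.Int63n(int64(delay) / 2)) // jitter
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return fmt.Errorf("connection refused")
    		}
    		return nil
    	})
    }

)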
	I1216 04:29:43.146742  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.207665  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.212209  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.212243  475694 retry.go:31] will retry after 291.464307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.318395  475694 type.go:168] "Request Body" body=""
	I1216 04:29:43.318472  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:43.318814  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:43.373308  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:43.435189  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.435254  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.435280  475694 retry.go:31] will retry after 781.758867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.504448  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.571334  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.571371  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.571390  475694 retry.go:31] will retry after 332.937553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.818906  475694 type.go:168] "Request Body" body=""
	I1216 04:29:43.818991  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:43.819297  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:43.904706  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.962384  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.966307  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.966396  475694 retry.go:31] will retry after 1.136896719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.217759  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:44.279618  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:44.283381  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.283415  475694 retry.go:31] will retry after 1.1051557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.318552  475694 type.go:168] "Request Body" body=""
	I1216 04:29:44.318673  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:44.319015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:44.818498  475694 type.go:168] "Request Body" body=""
	I1216 04:29:44.818571  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:44.818910  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:44.818988  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
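(node_ready.go is polling GET /api/v1/nodes/functional-763073 roughly every 500ms until the node reports Ready or the 6m budget runs out; while the apiserver is still restarting, every dial is refused and the warning above repeats. A sketch of such a readiness poll with client-go, using the node name from the log and the default kubeconfig path as stand-ins:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-763073", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		} else {
    			log.Printf("will retry: %v", err) // e.g. connection refused while the apiserver restarts
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("timed out waiting for node Ready")
    }

)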
	I1216 04:29:45.103534  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:45.194787  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:45.195010  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.195099  475694 retry.go:31] will retry after 1.211699823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.319146  475694 type.go:168] "Request Body" body=""
	I1216 04:29:45.319235  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:45.319562  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:45.388763  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:45.456804  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:45.456849  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.456877  475694 retry.go:31] will retry after 720.865488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.819295  475694 type.go:168] "Request Body" body=""
	I1216 04:29:45.819381  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:45.819670  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:46.178239  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:46.241684  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:46.241730  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.241750  475694 retry.go:31] will retry after 2.398929444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.318930  475694 type.go:168] "Request Body" body=""
	I1216 04:29:46.319008  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:46.319303  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:46.407630  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:46.476894  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:46.476941  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.476959  475694 retry.go:31] will retry after 1.300502308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.818702  475694 type.go:168] "Request Body" body=""
	I1216 04:29:46.818786  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:46.819124  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:46.819187  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:47.318514  475694 type.go:168] "Request Body" body=""
	I1216 04:29:47.318594  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:47.318866  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:47.778651  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:47.819040  475694 type.go:168] "Request Body" body=""
	I1216 04:29:47.819112  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:47.819424  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:47.836852  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:47.840282  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:47.840312  475694 retry.go:31] will retry after 3.994114703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.318482  475694 type.go:168] "Request Body" body=""
	I1216 04:29:48.318555  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:48.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:48.641498  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:48.705855  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:48.705903  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.705923  475694 retry.go:31] will retry after 1.757515206s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.819100  475694 type.go:168] "Request Body" body=""
	I1216 04:29:48.819185  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:48.819457  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:48.819514  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:49.319285  475694 type.go:168] "Request Body" body=""
	I1216 04:29:49.319362  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:49.319697  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:49.819385  475694 type.go:168] "Request Body" body=""
	I1216 04:29:49.819456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:49.819795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:50.318415  475694 type.go:168] "Request Body" body=""
	I1216 04:29:50.318509  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:50.318828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:50.464331  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:50.523255  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:50.523310  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:50.523330  475694 retry.go:31] will retry after 5.029530817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:50.818441  475694 type.go:168] "Request Body" body=""
	I1216 04:29:50.818532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:50.818884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:51.318457  475694 type.go:168] "Request Body" body=""
	I1216 04:29:51.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:51.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:51.318895  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:51.819013  475694 type.go:168] "Request Body" body=""
	I1216 04:29:51.819120  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:51.819434  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:51.834846  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:51.906733  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:51.906789  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:51.906807  475694 retry.go:31] will retry after 4.132534587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:52.319380  475694 type.go:168] "Request Body" body=""
	I1216 04:29:52.319456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:52.319782  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:52.818402  475694 type.go:168] "Request Body" body=""
	I1216 04:29:52.818481  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:52.818820  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:53.318399  475694 type.go:168] "Request Body" body=""
	I1216 04:29:53.318484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:53.318781  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:53.818364  475694 type.go:168] "Request Body" body=""
	I1216 04:29:53.818436  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:53.818718  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:53.818768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:54.318470  475694 type.go:168] "Request Body" body=""
	I1216 04:29:54.318553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:54.318855  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:54.818416  475694 type.go:168] "Request Body" body=""
	I1216 04:29:54.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:54.818791  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:55.318474  475694 type.go:168] "Request Body" body=""
	I1216 04:29:55.318563  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:55.318906  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:55.553265  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:55.626702  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:55.630832  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:55.630867  475694 retry.go:31] will retry after 7.132223529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:55.819263  475694 type.go:168] "Request Body" body=""
	I1216 04:29:55.819349  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:55.819703  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:55.819756  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:56.040181  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:56.104678  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:56.104716  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:56.104735  475694 retry.go:31] will retry after 8.857583825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:56.319036  475694 type.go:168] "Request Body" body=""
	I1216 04:29:56.319119  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:56.319453  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:56.819390  475694 type.go:168] "Request Body" body=""
	I1216 04:29:56.819466  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:56.819757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:57.319383  475694 type.go:168] "Request Body" body=""
	I1216 04:29:57.319466  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:57.319823  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:57.818398  475694 type.go:168] "Request Body" body=""
	I1216 04:29:57.818473  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:57.818722  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:58.319396  475694 type.go:168] "Request Body" body=""
	I1216 04:29:58.319513  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:58.319927  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:58.319980  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
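
[editor's note] Interleaved with the apply retries, the warning above comes from a second loop: node_ready issues a GET for /api/v1/nodes/functional-763073 roughly every 500ms and logs a warning whenever the dial fails. A self-contained sketch of that polling loop using only the standard library follows; pollNode is a hypothetical name (minikube really goes through client-go, hence the round_trippers lines), and InsecureSkipVerify stands in for its real client certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollNode reproduces the ~500ms readiness loop above: GET the node
	// object until the apiserver accepts connections. The URL and node name
	// are taken from the log.
	func pollNode(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				// "connection refused" lands here while the apiserver is
				// down, producing the repeated warning lines in the log.
				time.Sleep(500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			fmt.Printf("apiserver answered: HTTP %d\n", resp.StatusCode)
			return nil
		}
		return fmt.Errorf("no response from %s within %s", url, timeout)
	}

	func main() {
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-763073"
		if err := pollNode(url, 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
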
	I1216 04:29:58.818648  475694 type.go:168] "Request Body" body=""
	I1216 04:29:58.818727  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:58.819015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:59.318403  475694 type.go:168] "Request Body" body=""
	I1216 04:29:59.318501  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:59.318763  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:59.818481  475694 type.go:168] "Request Body" body=""
	I1216 04:29:59.818568  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:59.818883  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:00.318660  475694 type.go:168] "Request Body" body=""
	I1216 04:30:00.318742  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:00.319069  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:00.818779  475694 type.go:168] "Request Body" body=""
	I1216 04:30:00.818900  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:00.819255  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:00.819314  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:01.318812  475694 type.go:168] "Request Body" body=""
	I1216 04:30:01.318904  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:01.319269  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:01.818988  475694 type.go:168] "Request Body" body=""
	I1216 04:30:01.819066  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:01.819335  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:02.319195  475694 type.go:168] "Request Body" body=""
	I1216 04:30:02.319286  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:02.319671  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:02.763349  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:02.818891  475694 type.go:168] "Request Body" body=""
	I1216 04:30:02.818969  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:02.819274  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:02.830785  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:02.830835  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:02.830855  475694 retry.go:31] will retry after 11.115111011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:03.318424  475694 type.go:168] "Request Body" body=""
	I1216 04:30:03.318492  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:03.318754  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:03.318795  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:03.818481  475694 type.go:168] "Request Body" body=""
	I1216 04:30:03.818567  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:03.818887  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:04.318356  475694 type.go:168] "Request Body" body=""
	I1216 04:30:04.318440  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:04.318791  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:04.819354  475694 type.go:168] "Request Body" body=""
	I1216 04:30:04.819425  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:04.819745  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:04.963132  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:05.030528  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:05.030573  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:05.030594  475694 retry.go:31] will retry after 13.807129774s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:05.319025  475694 type.go:168] "Request Body" body=""
	I1216 04:30:05.319109  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:05.319430  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:05.319487  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:05.819077  475694 type.go:168] "Request Body" body=""
	I1216 04:30:05.819160  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:05.819454  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:06.319216  475694 type.go:168] "Request Body" body=""
	I1216 04:30:06.319298  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:06.319561  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:06.818566  475694 type.go:168] "Request Body" body=""
	I1216 04:30:06.818640  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:06.818960  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:07.319006  475694 type.go:168] "Request Body" body=""
	I1216 04:30:07.319080  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:07.319410  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:07.819153  475694 type.go:168] "Request Body" body=""
	I1216 04:30:07.819235  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:07.819526  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:07.819580  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:08.319363  475694 type.go:168] "Request Body" body=""
	I1216 04:30:08.319439  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:08.319857  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:08.818460  475694 type.go:168] "Request Body" body=""
	I1216 04:30:08.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:08.818880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:09.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:30:09.318512  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:09.318769  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:09.818489  475694 type.go:168] "Request Body" body=""
	I1216 04:30:09.818572  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:09.818873  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:10.318546  475694 type.go:168] "Request Body" body=""
	I1216 04:30:10.318636  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:10.319011  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:10.319072  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:10.818626  475694 type.go:168] "Request Body" body=""
	I1216 04:30:10.818702  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:10.819016  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:11.318440  475694 type.go:168] "Request Body" body=""
	I1216 04:30:11.318518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:11.318808  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:11.818916  475694 type.go:168] "Request Body" body=""
	I1216 04:30:11.818993  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:11.819322  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:12.319122  475694 type.go:168] "Request Body" body=""
	I1216 04:30:12.319197  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:12.319465  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:12.319515  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:12.819218  475694 type.go:168] "Request Body" body=""
	I1216 04:30:12.819289  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:12.819619  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:13.318346  475694 type.go:168] "Request Body" body=""
	I1216 04:30:13.318424  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:13.318745  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:13.818446  475694 type.go:168] "Request Body" body=""
	I1216 04:30:13.818521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:13.818889  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:13.946231  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:14.010550  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:14.014827  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:14.014869  475694 retry.go:31] will retry after 8.112010712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:14.319336  475694 type.go:168] "Request Body" body=""
	I1216 04:30:14.319410  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:14.319731  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:14.319784  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:14.818352  475694 type.go:168] "Request Body" body=""
	I1216 04:30:14.818426  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:14.818781  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:15.319376  475694 type.go:168] "Request Body" body=""
	I1216 04:30:15.319444  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:15.319700  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:15.818487  475694 type.go:168] "Request Body" body=""
	I1216 04:30:15.818563  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:15.818924  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:16.319359  475694 type.go:168] "Request Body" body=""
	I1216 04:30:16.319430  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:16.319765  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:16.319823  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:16.818746  475694 type.go:168] "Request Body" body=""
	I1216 04:30:16.818828  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:16.819089  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:17.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:30:17.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:17.318878  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:17.818576  475694 type.go:168] "Request Body" body=""
	I1216 04:30:17.818652  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:17.818985  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:18.318670  475694 type.go:168] "Request Body" body=""
	I1216 04:30:18.318748  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:18.319008  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:18.818464  475694 type.go:168] "Request Body" body=""
	I1216 04:30:18.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:18.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:18.818893  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:18.838055  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:18.893739  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:18.897596  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:18.897631  475694 retry.go:31] will retry after 11.366080685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:19.319301  475694 type.go:168] "Request Body" body=""
	I1216 04:30:19.319380  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:19.319681  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:19.819376  475694 type.go:168] "Request Body" body=""
	I1216 04:30:19.819458  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:19.819724  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:20.318407  475694 type.go:168] "Request Body" body=""
	I1216 04:30:20.318501  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:20.318840  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:20.818403  475694 type.go:168] "Request Body" body=""
	I1216 04:30:20.818484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:20.818835  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:21.318401  475694 type.go:168] "Request Body" body=""
	I1216 04:30:21.318469  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:21.318728  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:21.318768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:21.818866  475694 type.go:168] "Request Body" body=""
	I1216 04:30:21.818958  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:21.819324  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:22.127748  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:22.189082  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:22.189129  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:22.189148  475694 retry.go:31] will retry after 27.844564007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
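
[editor's note] Every one of these failures ends with kubectl's hint to pass --validate=false. That flag only skips the OpenAPI schema download that is failing; the apply itself still needs a reachable apiserver, so it would not rescue this run. A hypothetical fallback helper (applyValidateFallback is an invented name, not minikube code) that takes the hint literally would look like this:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// applyValidateFallback retries once with client-side validation off, but
	// only when the failure is the schema download itself. With the apiserver
	// down, the second apply would still be refused; the fallback only helps
	// when validation, not connectivity, is the problem.
	func applyValidateFallback(manifest string) error {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err != nil && strings.Contains(string(out), "failed to download openapi") {
			out, err = exec.Command("kubectl", "apply", "--force", "--validate=false", "-f", manifest).CombinedOutput()
		}
		if err != nil {
			return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
		}
		return nil
	}

	func main() {
		if err := applyValidateFallback("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
			fmt.Println(err)
		}
	}
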
	I1216 04:30:22.319363  475694 type.go:168] "Request Body" body=""
	I1216 04:30:22.319433  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:22.319757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:22.818358  475694 type.go:168] "Request Body" body=""
	I1216 04:30:22.818435  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:22.818698  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:23.319415  475694 type.go:168] "Request Body" body=""
	I1216 04:30:23.319492  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:23.319809  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:23.319865  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:23.818531  475694 type.go:168] "Request Body" body=""
	I1216 04:30:23.818610  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:23.818962  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:24.318495  475694 type.go:168] "Request Body" body=""
	I1216 04:30:24.318564  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:24.318816  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:24.818435  475694 type.go:168] "Request Body" body=""
	I1216 04:30:24.818517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:24.818856  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:25.318545  475694 type.go:168] "Request Body" body=""
	I1216 04:30:25.318628  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:25.318920  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:25.818420  475694 type.go:168] "Request Body" body=""
	I1216 04:30:25.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:25.818846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:25.818900  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:26.318452  475694 type.go:168] "Request Body" body=""
	I1216 04:30:26.318530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:26.318905  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:26.818764  475694 type.go:168] "Request Body" body=""
	I1216 04:30:26.818839  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:26.819183  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:27.318950  475694 type.go:168] "Request Body" body=""
	I1216 04:30:27.319026  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:27.319288  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:27.819187  475694 type.go:168] "Request Body" body=""
	I1216 04:30:27.819262  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:27.819610  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:27.819670  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:28.319414  475694 type.go:168] "Request Body" body=""
	I1216 04:30:28.319507  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:28.319802  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:28.818429  475694 type.go:168] "Request Body" body=""
	I1216 04:30:28.818505  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:28.818767  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:29.318476  475694 type.go:168] "Request Body" body=""
	I1216 04:30:29.318551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:29.318919  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:29.818620  475694 type.go:168] "Request Body" body=""
	I1216 04:30:29.818707  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:29.819030  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:30.264789  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:30.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:30:30.318482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:30.318747  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:30.318791  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:30.329449  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:30.329484  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:30.329503  475694 retry.go:31] will retry after 18.349811318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:30.819293  475694 type.go:168] "Request Body" body=""
	I1216 04:30:30.819380  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:30.819741  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:31.318473  475694 type.go:168] "Request Body" body=""
	I1216 04:30:31.318550  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:31.318884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:31.818872  475694 type.go:168] "Request Body" body=""
	I1216 04:30:31.818940  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:31.819221  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:32.319072  475694 type.go:168] "Request Body" body=""
	I1216 04:30:32.319152  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:32.319497  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:32.319550  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:32.819264  475694 type.go:168] "Request Body" body=""
	I1216 04:30:32.819341  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:32.819678  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:33.319325  475694 type.go:168] "Request Body" body=""
	I1216 04:30:33.319391  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:33.319698  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:33.818422  475694 type.go:168] "Request Body" body=""
	I1216 04:30:33.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:33.818854  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:34.318569  475694 type.go:168] "Request Body" body=""
	I1216 04:30:34.318644  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:34.318965  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:34.818658  475694 type.go:168] "Request Body" body=""
	I1216 04:30:34.818733  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:34.819000  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:34.819051  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:35.318384  475694 type.go:168] "Request Body" body=""
	I1216 04:30:35.318462  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:35.318839  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:35.818450  475694 type.go:168] "Request Body" body=""
	I1216 04:30:35.818528  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:35.818876  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:36.318610  475694 type.go:168] "Request Body" body=""
	I1216 04:30:36.318679  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:36.318948  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:36.818786  475694 type.go:168] "Request Body" body=""
	I1216 04:30:36.818871  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:36.819206  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:36.819259  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:37.318997  475694 type.go:168] "Request Body" body=""
	I1216 04:30:37.319078  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:37.319374  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:37.819133  475694 type.go:168] "Request Body" body=""
	I1216 04:30:37.819207  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:37.819482  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:38.319323  475694 type.go:168] "Request Body" body=""
	I1216 04:30:38.319397  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:38.319736  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:38.818432  475694 type.go:168] "Request Body" body=""
	I1216 04:30:38.818517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:38.818843  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:39.318407  475694 type.go:168] "Request Body" body=""
	I1216 04:30:39.318474  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:39.318729  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:39.318768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:39.818457  475694 type.go:168] "Request Body" body=""
	I1216 04:30:39.818539  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:39.818884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:40.318619  475694 type.go:168] "Request Body" body=""
	I1216 04:30:40.318693  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:40.319014  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-763073 poll repeats every ~500ms through 04:30:48.3, every attempt ending in "connection refused"; node_ready.go emits a will-retry warning roughly every fifth attempt ...]
	W1216 04:30:48.318869  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
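The warning above is node_ready.go giving up on one attempt and scheduling the next: a refused connection is expected while the apiserver restarts, so it only triggers another poll. A sketch of that poll-until-reachable loop, assuming a plain HTTP client and Unix error codes; minikube's real loop goes through the Kubernetes client and checks the node's Ready condition.

// node_poll.go: sketch of the ~500ms connection-refused retry loop.
package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"syscall"
	"time"
)

// waitNodeReachable polls url until it answers or ctx expires. A refused
// connection means nothing is listening yet, so it is treated as retryable.
func waitNodeReachable(ctx context.Context, url string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		resp, err := http.Get(url)
		switch {
		case err == nil:
			resp.Body.Close()
			return nil // the apiserver is answering again
		case errors.Is(err, syscall.ECONNREFUSED):
			// expected while the apiserver is down: retry on the next tick
		default:
			return err // anything else is not retryable in this sketch
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node never became reachable: %w", ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	fmt.Println(waitNodeReachable(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-763073"))
}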
	I1216 04:30:48.679520  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:48.741510  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:48.741587  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:48.741616  475694 retry.go:31] will retry after 29.090794722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
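The oddly precise delay above (29.090794722s) is a jittered backoff computed by minikube's retry.go so that parallel addon retries spread out instead of hammering the apiserver together. One way to produce delays like that, as a sketch; the exact backoff policy retry.go uses may differ.

// retry_jitter.go: sketch of retry with exponential, jittered backoff.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter runs fn up to attempts times, sleeping a randomized,
// growing interval between failures, like the addon apply retries above.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Exponential growth with +/-50% jitter around the current backoff.
		backoff := base << uint(i)
		sleep := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	_ = retryWithJitter(3, 2*time.Second, func() error {
		return fmt.Errorf("apply failed: connection refused")
	})
}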
	[... three more refused polls at 04:30:48.8, 04:30:49.3, and 04:30:49.8 ...]
	I1216 04:30:50.034416  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:50.096674  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:50.100468  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:50.100502  475694 retry.go:31] will retry after 39.426681546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... the refused poll continues every ~500ms from 04:30:50.3 through 04:31:17.8, with node_ready.go will-retry warnings every ~2.5s; no request reaches the apiserver in this window ...]
	I1216 04:31:17.833208  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:31:17.902395  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:17.906323  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:17.906439  475694 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
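kubectl suggests --validate=false because manifest validation needs the apiserver's /openapi/v2 endpoint, which is refusing connections. A sketch of re-running the same apply with validation off, purely for illustration; minikube's addon code keeps validation on and retries instead, and skipping validation means manifest typos reach the server unchecked.

// apply_novalidate.go: sketch of the logged apply plus the suggested flag.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command as the log above, with the --validate=false that the
	// error message suggests (sudo accepts the VAR=value prefix as-is).
	out, err := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}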
	[... polling remains refused from 04:31:18.3 through 04:31:29.3, will-retry warnings included ...]
	I1216 04:31:29.528240  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:31:29.598877  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:29.598918  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:29.598995  475694 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:31:29.602136  475694 out.go:179] * Enabled addons: 
	I1216 04:31:29.604114  475694 addons.go:530] duration metric: took 1m47.775177414s for enable addons: enabled=[]
	[... the poll keeps returning "connection refused" from 04:31:29.8 onward ...]
	W1216 04:31:35.318943  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:35.818615  475694 type.go:168] "Request Body" body=""
	I1216 04:31:35.818692  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:35.819009  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:36.318408  475694 type.go:168] "Request Body" body=""
	I1216 04:31:36.318490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:36.318747  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:36.818925  475694 type.go:168] "Request Body" body=""
	I1216 04:31:36.819003  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:36.819578  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:37.319361  475694 type.go:168] "Request Body" body=""
	I1216 04:31:37.319459  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:37.319790  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:37.319835  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:37.818431  475694 type.go:168] "Request Body" body=""
	I1216 04:31:37.818525  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:37.818876  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:38.318453  475694 type.go:168] "Request Body" body=""
	I1216 04:31:38.318535  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:38.318874  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:38.818429  475694 type.go:168] "Request Body" body=""
	I1216 04:31:38.818504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:38.818816  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:39.318529  475694 type.go:168] "Request Body" body=""
	I1216 04:31:39.318609  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:39.318895  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:39.818381  475694 type.go:168] "Request Body" body=""
	I1216 04:31:39.818456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:39.818789  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:39.818858  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:40.318433  475694 type.go:168] "Request Body" body=""
	I1216 04:31:40.318507  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:40.318811  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:40.818376  475694 type.go:168] "Request Body" body=""
	I1216 04:31:40.818450  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:40.818707  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:41.318416  475694 type.go:168] "Request Body" body=""
	I1216 04:31:41.318824  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:41.319203  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:41.819213  475694 type.go:168] "Request Body" body=""
	I1216 04:31:41.819296  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:41.819635  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:41.819695  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:42.319416  475694 type.go:168] "Request Body" body=""
	I1216 04:31:42.319499  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:42.319800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:42.818819  475694 type.go:168] "Request Body" body=""
	I1216 04:31:42.818916  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:42.819270  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:43.319056  475694 type.go:168] "Request Body" body=""
	I1216 04:31:43.319132  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:43.319459  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:43.819240  475694 type.go:168] "Request Body" body=""
	I1216 04:31:43.819310  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:43.819650  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:44.319420  475694 type.go:168] "Request Body" body=""
	I1216 04:31:44.319496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:44.319840  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:44.319896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:44.818558  475694 type.go:168] "Request Body" body=""
	I1216 04:31:44.818637  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:44.818980  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:45.318674  475694 type.go:168] "Request Body" body=""
	I1216 04:31:45.318748  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:45.319042  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:45.818436  475694 type.go:168] "Request Body" body=""
	I1216 04:31:45.818512  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:45.818872  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:46.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:31:46.318525  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:46.318863  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:46.818761  475694 type.go:168] "Request Body" body=""
	I1216 04:31:46.818837  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:46.819095  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:46.819145  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:47.318441  475694 type.go:168] "Request Body" body=""
	I1216 04:31:47.318515  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:47.318857  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:47.818554  475694 type.go:168] "Request Body" body=""
	I1216 04:31:47.818627  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:47.818943  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:48.318406  475694 type.go:168] "Request Body" body=""
	I1216 04:31:48.318482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:48.318744  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:48.818444  475694 type.go:168] "Request Body" body=""
	I1216 04:31:48.818531  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:48.818844  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:49.318456  475694 type.go:168] "Request Body" body=""
	I1216 04:31:49.318533  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:49.318871  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:49.318926  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:49.818452  475694 type.go:168] "Request Body" body=""
	I1216 04:31:49.818529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:49.818832  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:50.318454  475694 type.go:168] "Request Body" body=""
	I1216 04:31:50.318530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:50.318907  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:50.818617  475694 type.go:168] "Request Body" body=""
	I1216 04:31:50.818699  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:50.819034  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:51.318728  475694 type.go:168] "Request Body" body=""
	I1216 04:31:51.318799  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:51.319084  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:51.319133  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:51.819260  475694 type.go:168] "Request Body" body=""
	I1216 04:31:51.819337  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:51.819646  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:52.319367  475694 type.go:168] "Request Body" body=""
	I1216 04:31:52.319460  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:52.319796  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:52.818415  475694 type.go:168] "Request Body" body=""
	I1216 04:31:52.818483  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:52.818735  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:53.318406  475694 type.go:168] "Request Body" body=""
	I1216 04:31:53.318485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:53.318824  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:53.818542  475694 type.go:168] "Request Body" body=""
	I1216 04:31:53.818618  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:53.818932  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:53.818988  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:54.318422  475694 type.go:168] "Request Body" body=""
	I1216 04:31:54.318498  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:54.318812  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:54.818426  475694 type.go:168] "Request Body" body=""
	I1216 04:31:54.818504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:54.818816  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:55.318417  475694 type.go:168] "Request Body" body=""
	I1216 04:31:55.318540  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:55.318874  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:55.818438  475694 type.go:168] "Request Body" body=""
	I1216 04:31:55.818515  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:55.818786  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:56.318390  475694 type.go:168] "Request Body" body=""
	I1216 04:31:56.318481  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:56.318813  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:56.318866  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:56.818718  475694 type.go:168] "Request Body" body=""
	I1216 04:31:56.818805  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:56.819146  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:57.318413  475694 type.go:168] "Request Body" body=""
	I1216 04:31:57.318491  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:57.318738  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:57.818407  475694 type.go:168] "Request Body" body=""
	I1216 04:31:57.818490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:57.818817  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:58.319373  475694 type.go:168] "Request Body" body=""
	I1216 04:31:58.319454  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:58.319808  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:58.319866  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:58.818411  475694 type.go:168] "Request Body" body=""
	I1216 04:31:58.818485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:58.818811  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:59.318438  475694 type.go:168] "Request Body" body=""
	I1216 04:31:59.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:59.318871  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:59.818458  475694 type.go:168] "Request Body" body=""
	I1216 04:31:59.818539  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:59.818868  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:00.318393  475694 type.go:168] "Request Body" body=""
	I1216 04:32:00.318480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:00.318804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:00.818402  475694 type.go:168] "Request Body" body=""
	I1216 04:32:00.818504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:00.818841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:00.818896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:01.318351  475694 type.go:168] "Request Body" body=""
	I1216 04:32:01.318435  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:01.318792  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:01.818924  475694 type.go:168] "Request Body" body=""
	I1216 04:32:01.819034  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:01.819306  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:02.319090  475694 type.go:168] "Request Body" body=""
	I1216 04:32:02.319167  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:02.319503  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:02.819160  475694 type.go:168] "Request Body" body=""
	I1216 04:32:02.819236  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:02.819573  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:02.819634  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:03.318344  475694 type.go:168] "Request Body" body=""
	I1216 04:32:03.318419  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:03.318768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:03.818452  475694 type.go:168] "Request Body" body=""
	I1216 04:32:03.818529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:03.818850  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:04.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:32:04.318526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:04.318821  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:04.818415  475694 type.go:168] "Request Body" body=""
	I1216 04:32:04.818489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:04.818766  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:05.318488  475694 type.go:168] "Request Body" body=""
	I1216 04:32:05.318585  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:05.318952  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:05.319013  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:05.818686  475694 type.go:168] "Request Body" body=""
	I1216 04:32:05.818766  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:05.819098  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:06.318837  475694 type.go:168] "Request Body" body=""
	I1216 04:32:06.318913  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:06.319181  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:06.819150  475694 type.go:168] "Request Body" body=""
	I1216 04:32:06.819232  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:06.819586  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:07.319256  475694 type.go:168] "Request Body" body=""
	I1216 04:32:07.319343  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:07.319687  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:07.319743  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:07.819375  475694 type.go:168] "Request Body" body=""
	I1216 04:32:07.819456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:07.819717  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:08.318408  475694 type.go:168] "Request Body" body=""
	I1216 04:32:08.318487  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:08.318845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:08.818405  475694 type.go:168] "Request Body" body=""
	I1216 04:32:08.818488  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:08.818845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:09.318425  475694 type.go:168] "Request Body" body=""
	I1216 04:32:09.318495  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:09.318754  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:09.818410  475694 type.go:168] "Request Body" body=""
	I1216 04:32:09.818492  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:09.818839  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:09.818896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:10.318578  475694 type.go:168] "Request Body" body=""
	I1216 04:32:10.318664  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:10.319047  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:10.818778  475694 type.go:168] "Request Body" body=""
	I1216 04:32:10.818852  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:10.819114  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:11.318395  475694 type.go:168] "Request Body" body=""
	I1216 04:32:11.318476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:11.318821  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:11.819011  475694 type.go:168] "Request Body" body=""
	I1216 04:32:11.819097  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:11.819452  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:11.819512  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:12.319053  475694 type.go:168] "Request Body" body=""
	I1216 04:32:12.319128  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:12.319419  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:12.819173  475694 type.go:168] "Request Body" body=""
	I1216 04:32:12.819252  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:12.819584  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:13.319194  475694 type.go:168] "Request Body" body=""
	I1216 04:32:13.319275  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:13.319589  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:13.819219  475694 type.go:168] "Request Body" body=""
	I1216 04:32:13.819286  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:13.819552  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:13.819595  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:14.319398  475694 type.go:168] "Request Body" body=""
	I1216 04:32:14.319472  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:14.319816  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:14.818518  475694 type.go:168] "Request Body" body=""
	I1216 04:32:14.818598  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:14.818951  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:15.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:32:15.318496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:15.318748  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:15.818367  475694 type.go:168] "Request Body" body=""
	I1216 04:32:15.818442  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:15.818778  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:16.318368  475694 type.go:168] "Request Body" body=""
	I1216 04:32:16.318450  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:16.318785  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:16.318842  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:16.818645  475694 type.go:168] "Request Body" body=""
	I1216 04:32:16.818715  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:16.818981  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:17.318355  475694 type.go:168] "Request Body" body=""
	I1216 04:32:17.318433  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:17.318766  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:17.818482  475694 type.go:168] "Request Body" body=""
	I1216 04:32:17.818562  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:17.818895  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:18.318566  475694 type.go:168] "Request Body" body=""
	I1216 04:32:18.318640  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:18.318945  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:18.319006  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:18.818434  475694 type.go:168] "Request Body" body=""
	I1216 04:32:18.818516  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:18.818842  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:19.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:32:19.318516  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:19.318846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:19.819341  475694 type.go:168] "Request Body" body=""
	I1216 04:32:19.819415  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:19.819722  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:20.318384  475694 type.go:168] "Request Body" body=""
	I1216 04:32:20.318467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:20.318801  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:20.818415  475694 type.go:168] "Request Body" body=""
	I1216 04:32:20.818494  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:20.818869  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:20.818924  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:21.318563  475694 type.go:168] "Request Body" body=""
	I1216 04:32:21.318632  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:21.318896  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:21.818868  475694 type.go:168] "Request Body" body=""
	I1216 04:32:21.818945  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:21.819262  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:22.318832  475694 type.go:168] "Request Body" body=""
	I1216 04:32:22.318939  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:22.319249  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:22.818805  475694 type.go:168] "Request Body" body=""
	I1216 04:32:22.818880  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:22.819174  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:22.819224  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:23.318762  475694 type.go:168] "Request Body" body=""
	I1216 04:32:23.318839  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:23.319185  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:23.818996  475694 type.go:168] "Request Body" body=""
	I1216 04:32:23.819074  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:23.819390  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:24.319143  475694 type.go:168] "Request Body" body=""
	I1216 04:32:24.319208  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:24.319468  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:24.819344  475694 type.go:168] "Request Body" body=""
	I1216 04:32:24.819421  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:24.819753  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:24.819813  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-763073 request, with identical Accept and User-Agent headers, is retried roughly every 500ms from 04:32:25.318 through 04:33:25.319; every attempt returns an empty response (status="" headers="" milliseconds=0), and node_ready.go:55 repeats the same "connection refused" warning every 2 to 2.5 seconds ...]
	I1216 04:33:25.818409  475694 type.go:168] "Request Body" body=""
	I1216 04:33:25.818485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:25.818745  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:25.818786  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:26.318438  475694 type.go:168] "Request Body" body=""
	I1216 04:33:26.318513  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:26.318852  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:26.818684  475694 type.go:168] "Request Body" body=""
	I1216 04:33:26.818758  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:26.819084  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:27.318750  475694 type.go:168] "Request Body" body=""
	I1216 04:33:27.318819  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:27.319109  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:27.818989  475694 type.go:168] "Request Body" body=""
	I1216 04:33:27.819067  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:27.819405  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:27.819467  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:28.319223  475694 type.go:168] "Request Body" body=""
	I1216 04:33:28.319304  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:28.319635  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:28.819335  475694 type.go:168] "Request Body" body=""
	I1216 04:33:28.819403  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:28.819660  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:29.319416  475694 type.go:168] "Request Body" body=""
	I1216 04:33:29.319496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:29.319818  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:29.818393  475694 type.go:168] "Request Body" body=""
	I1216 04:33:29.818474  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:29.818789  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:30.318337  475694 type.go:168] "Request Body" body=""
	I1216 04:33:30.318409  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:30.318735  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:30.318791  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:30.818464  475694 type.go:168] "Request Body" body=""
	I1216 04:33:30.818550  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:30.818923  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:31.318395  475694 type.go:168] "Request Body" body=""
	I1216 04:33:31.318467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:31.318757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:31.818850  475694 type.go:168] "Request Body" body=""
	I1216 04:33:31.818935  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:31.819244  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:32.319014  475694 type.go:168] "Request Body" body=""
	I1216 04:33:32.319087  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:32.319396  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:32.319454  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:32.819204  475694 type.go:168] "Request Body" body=""
	I1216 04:33:32.819281  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:32.819603  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:33.318343  475694 type.go:168] "Request Body" body=""
	I1216 04:33:33.318412  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:33.318673  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:33.818346  475694 type.go:168] "Request Body" body=""
	I1216 04:33:33.818425  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:33.818774  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:34.318496  475694 type.go:168] "Request Body" body=""
	I1216 04:33:34.318588  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:34.318954  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:34.818515  475694 type.go:168] "Request Body" body=""
	I1216 04:33:34.818592  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:34.818900  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:34.818954  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:35.318474  475694 type.go:168] "Request Body" body=""
	I1216 04:33:35.318547  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:35.318865  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:35.818444  475694 type.go:168] "Request Body" body=""
	I1216 04:33:35.818522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:35.818838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:36.319305  475694 type.go:168] "Request Body" body=""
	I1216 04:33:36.319382  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:36.319641  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:36.818606  475694 type.go:168] "Request Body" body=""
	I1216 04:33:36.818685  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:36.819006  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:36.819059  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:37.318444  475694 type.go:168] "Request Body" body=""
	I1216 04:33:37.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:37.319017  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:37.819328  475694 type.go:168] "Request Body" body=""
	I1216 04:33:37.819394  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:37.819638  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:38.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:33:38.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:38.318866  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:38.818571  475694 type.go:168] "Request Body" body=""
	I1216 04:33:38.818700  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:38.819026  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:38.819078  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:39.318710  475694 type.go:168] "Request Body" body=""
	I1216 04:33:39.318778  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:39.319044  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:39.819409  475694 type.go:168] "Request Body" body=""
	I1216 04:33:39.819485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:39.819829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:40.318446  475694 type.go:168] "Request Body" body=""
	I1216 04:33:40.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:40.318839  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:40.818412  475694 type.go:168] "Request Body" body=""
	I1216 04:33:40.818486  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:40.818796  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:41.318443  475694 type.go:168] "Request Body" body=""
	I1216 04:33:41.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:41.318852  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:41.318906  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:41.819057  475694 type.go:168] "Request Body" body=""
	I1216 04:33:41.819136  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:41.819499  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:42.319351  475694 type.go:168] "Request Body" body=""
	I1216 04:33:42.319425  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:42.319803  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:42.818567  475694 type.go:168] "Request Body" body=""
	I1216 04:33:42.818642  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:42.818971  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:43.318717  475694 type.go:168] "Request Body" body=""
	I1216 04:33:43.318804  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:43.319182  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:43.319246  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:43.818995  475694 type.go:168] "Request Body" body=""
	I1216 04:33:43.819063  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:43.819321  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:44.318768  475694 type.go:168] "Request Body" body=""
	I1216 04:33:44.318846  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:44.319210  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:44.819022  475694 type.go:168] "Request Body" body=""
	I1216 04:33:44.819099  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:44.819428  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:45.319171  475694 type.go:168] "Request Body" body=""
	I1216 04:33:45.319254  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:45.319544  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:45.319590  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:45.819402  475694 type.go:168] "Request Body" body=""
	I1216 04:33:45.819476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:45.819848  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:46.318583  475694 type.go:168] "Request Body" body=""
	I1216 04:33:46.318665  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:46.319025  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:46.818778  475694 type.go:168] "Request Body" body=""
	I1216 04:33:46.818846  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:46.819141  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:47.318533  475694 type.go:168] "Request Body" body=""
	I1216 04:33:47.318611  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:47.318979  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:47.818444  475694 type.go:168] "Request Body" body=""
	I1216 04:33:47.818523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:47.818889  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:47.818943  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:48.318411  475694 type.go:168] "Request Body" body=""
	I1216 04:33:48.318485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:48.318751  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:48.818469  475694 type.go:168] "Request Body" body=""
	I1216 04:33:48.818563  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:48.818990  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:49.318697  475694 type.go:168] "Request Body" body=""
	I1216 04:33:49.318781  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:49.319111  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:49.818787  475694 type.go:168] "Request Body" body=""
	I1216 04:33:49.818863  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:49.819129  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:49.819172  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:50.318462  475694 type.go:168] "Request Body" body=""
	I1216 04:33:50.318541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:50.318886  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:50.818606  475694 type.go:168] "Request Body" body=""
	I1216 04:33:50.818682  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:50.819022  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:51.318712  475694 type.go:168] "Request Body" body=""
	I1216 04:33:51.318781  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:51.319167  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:51.819070  475694 type.go:168] "Request Body" body=""
	I1216 04:33:51.819144  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:51.819478  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:51.819532  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:52.319248  475694 type.go:168] "Request Body" body=""
	I1216 04:33:52.319323  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:52.319652  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:52.819368  475694 type.go:168] "Request Body" body=""
	I1216 04:33:52.819441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:52.819761  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:53.318435  475694 type.go:168] "Request Body" body=""
	I1216 04:33:53.318511  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:53.318783  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:53.818474  475694 type.go:168] "Request Body" body=""
	I1216 04:33:53.818549  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:53.818887  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:54.319385  475694 type.go:168] "Request Body" body=""
	I1216 04:33:54.319453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:54.319704  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:54.319744  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:54.818347  475694 type.go:168] "Request Body" body=""
	I1216 04:33:54.818422  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:54.818747  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:55.318483  475694 type.go:168] "Request Body" body=""
	I1216 04:33:55.318582  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:55.318963  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:55.818650  475694 type.go:168] "Request Body" body=""
	I1216 04:33:55.818724  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:55.819014  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:56.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:33:56.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:56.318842  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:56.818765  475694 type.go:168] "Request Body" body=""
	I1216 04:33:56.818843  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:56.819221  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:56.819280  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:57.318987  475694 type.go:168] "Request Body" body=""
	I1216 04:33:57.319070  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:57.319350  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:57.819171  475694 type.go:168] "Request Body" body=""
	I1216 04:33:57.819249  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:57.819603  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:58.319386  475694 type.go:168] "Request Body" body=""
	I1216 04:33:58.319472  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:58.319778  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:58.819329  475694 type.go:168] "Request Body" body=""
	I1216 04:33:58.819413  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:58.819741  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:58.819797  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:59.318438  475694 type.go:168] "Request Body" body=""
	I1216 04:33:59.318517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:59.318860  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:59.818440  475694 type.go:168] "Request Body" body=""
	I1216 04:33:59.818521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:59.818866  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:00.328767  475694 type.go:168] "Request Body" body=""
	I1216 04:34:00.328849  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:00.329179  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:00.819012  475694 type.go:168] "Request Body" body=""
	I1216 04:34:00.819093  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:00.819419  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:01.319182  475694 type.go:168] "Request Body" body=""
	I1216 04:34:01.319271  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:01.319631  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:01.319685  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:01.818692  475694 type.go:168] "Request Body" body=""
	I1216 04:34:01.818765  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:01.819031  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:02.318365  475694 type.go:168] "Request Body" body=""
	I1216 04:34:02.318443  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:02.318747  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:02.818390  475694 type.go:168] "Request Body" body=""
	I1216 04:34:02.818471  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:02.818800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:03.318344  475694 type.go:168] "Request Body" body=""
	I1216 04:34:03.318422  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:03.318678  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:03.818350  475694 type.go:168] "Request Body" body=""
	I1216 04:34:03.818431  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:03.818768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:03.818824  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:04.319347  475694 type.go:168] "Request Body" body=""
	I1216 04:34:04.319423  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:04.319769  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:04.818540  475694 type.go:168] "Request Body" body=""
	I1216 04:34:04.818608  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:04.818855  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:05.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:34:05.318534  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:05.318911  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:05.818481  475694 type.go:168] "Request Body" body=""
	I1216 04:34:05.818570  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:05.818899  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:05.818957  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:06.319348  475694 type.go:168] "Request Body" body=""
	I1216 04:34:06.319422  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:06.319689  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:06.818777  475694 type.go:168] "Request Body" body=""
	I1216 04:34:06.818855  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:06.819214  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:07.319022  475694 type.go:168] "Request Body" body=""
	I1216 04:34:07.319101  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:07.319438  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:07.819176  475694 type.go:168] "Request Body" body=""
	I1216 04:34:07.819248  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:07.819494  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:07.819532  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:08.319248  475694 type.go:168] "Request Body" body=""
	I1216 04:34:08.319324  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:08.319660  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:08.819334  475694 type.go:168] "Request Body" body=""
	I1216 04:34:08.819414  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:08.819748  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:09.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:34:09.318487  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:09.318728  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:09.818404  475694 type.go:168] "Request Body" body=""
	I1216 04:34:09.818495  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:09.818787  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:10.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:34:10.318526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:10.318882  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:10.318937  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:10.819332  475694 type.go:168] "Request Body" body=""
	I1216 04:34:10.819407  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:10.819663  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:11.318375  475694 type.go:168] "Request Body" body=""
	I1216 04:34:11.318447  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:11.318755  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:11.818996  475694 type.go:168] "Request Body" body=""
	I1216 04:34:11.819077  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:11.819410  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:12.319155  475694 type.go:168] "Request Body" body=""
	I1216 04:34:12.319226  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:12.319475  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:12.319519  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:12.819271  475694 type.go:168] "Request Body" body=""
	I1216 04:34:12.819346  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:12.819689  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:13.318380  475694 type.go:168] "Request Body" body=""
	I1216 04:34:13.318462  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:13.318793  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:13.818479  475694 type.go:168] "Request Body" body=""
	I1216 04:34:13.818559  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:13.818826  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:14.318453  475694 type.go:168] "Request Body" body=""
	I1216 04:34:14.318535  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:14.318885  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:14.818564  475694 type.go:168] "Request Body" body=""
	I1216 04:34:14.818639  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:14.818968  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:14.819021  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:15.318668  475694 type.go:168] "Request Body" body=""
	I1216 04:34:15.318742  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:15.319003  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:15.818382  475694 type.go:168] "Request Body" body=""
	I1216 04:34:15.818461  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:15.818778  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:16.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:34:16.318521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:16.318867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:16.818753  475694 type.go:168] "Request Body" body=""
	I1216 04:34:16.818825  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:16.819126  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:16.819186  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same ~500ms GET/retry cycle repeats from 04:34:17 through 04:35:15: every request to https://192.168.49.2:8441/api/v1/nodes/functional-763073 fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs the "will retry" warning every few attempts ...]
	I1216 04:35:16.318443  475694 type.go:168] "Request Body" body=""
	I1216 04:35:16.318518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:16.318843  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:16.818768  475694 type.go:168] "Request Body" body=""
	I1216 04:35:16.818839  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:16.819094  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:17.318429  475694 type.go:168] "Request Body" body=""
	I1216 04:35:17.318503  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:17.318829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:17.818390  475694 type.go:168] "Request Body" body=""
	I1216 04:35:17.818465  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:17.818786  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:18.318479  475694 type.go:168] "Request Body" body=""
	I1216 04:35:18.318546  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:18.318807  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:18.318849  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:18.818373  475694 type.go:168] "Request Body" body=""
	I1216 04:35:18.818453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:18.818776  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:19.318510  475694 type.go:168] "Request Body" body=""
	I1216 04:35:19.318592  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:19.318922  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:19.818621  475694 type.go:168] "Request Body" body=""
	I1216 04:35:19.818702  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:19.818973  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:20.318397  475694 type.go:168] "Request Body" body=""
	I1216 04:35:20.318480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:20.318838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:20.318892  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:20.818426  475694 type.go:168] "Request Body" body=""
	I1216 04:35:20.818507  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:20.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:21.318544  475694 type.go:168] "Request Body" body=""
	I1216 04:35:21.318656  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:21.318922  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:21.819053  475694 type.go:168] "Request Body" body=""
	I1216 04:35:21.819131  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:21.819472  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:22.319274  475694 type.go:168] "Request Body" body=""
	I1216 04:35:22.319345  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:22.319672  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:22.319728  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:22.818396  475694 type.go:168] "Request Body" body=""
	I1216 04:35:22.818467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:22.818895  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:23.318440  475694 type.go:168] "Request Body" body=""
	I1216 04:35:23.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:23.318836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:23.818345  475694 type.go:168] "Request Body" body=""
	I1216 04:35:23.818420  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:23.818765  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:24.319370  475694 type.go:168] "Request Body" body=""
	I1216 04:35:24.319441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:24.319704  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:24.818474  475694 type.go:168] "Request Body" body=""
	I1216 04:35:24.818553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:24.818904  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:24.818962  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:25.318346  475694 type.go:168] "Request Body" body=""
	I1216 04:35:25.318430  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:25.318768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:25.819340  475694 type.go:168] "Request Body" body=""
	I1216 04:35:25.819421  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:25.819694  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:26.319409  475694 type.go:168] "Request Body" body=""
	I1216 04:35:26.319480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:26.319786  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:26.818711  475694 type.go:168] "Request Body" body=""
	I1216 04:35:26.818786  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:26.819098  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:26.819158  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:27.318411  475694 type.go:168] "Request Body" body=""
	I1216 04:35:27.318489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:27.318803  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:27.818479  475694 type.go:168] "Request Body" body=""
	I1216 04:35:27.818557  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:27.818881  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:28.318434  475694 type.go:168] "Request Body" body=""
	I1216 04:35:28.318510  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:28.318832  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:28.818494  475694 type.go:168] "Request Body" body=""
	I1216 04:35:28.818562  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:28.818812  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:29.318411  475694 type.go:168] "Request Body" body=""
	I1216 04:35:29.318484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:29.318838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:29.318892  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:29.818377  475694 type.go:168] "Request Body" body=""
	I1216 04:35:29.818455  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:29.818804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:30.319321  475694 type.go:168] "Request Body" body=""
	I1216 04:35:30.319394  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:30.319671  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:30.818400  475694 type.go:168] "Request Body" body=""
	I1216 04:35:30.818475  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:30.818821  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:31.318538  475694 type.go:168] "Request Body" body=""
	I1216 04:35:31.318610  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:31.318926  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:31.318982  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:31.819068  475694 type.go:168] "Request Body" body=""
	I1216 04:35:31.819136  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:31.819402  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:32.319162  475694 type.go:168] "Request Body" body=""
	I1216 04:35:32.319242  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:32.319568  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:32.819397  475694 type.go:168] "Request Body" body=""
	I1216 04:35:32.819471  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:32.819805  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:33.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:35:33.318490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:33.318749  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:33.818404  475694 type.go:168] "Request Body" body=""
	I1216 04:35:33.818483  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:33.818824  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:33.818882  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:34.318388  475694 type.go:168] "Request Body" body=""
	I1216 04:35:34.318473  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:34.318868  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:34.819425  475694 type.go:168] "Request Body" body=""
	I1216 04:35:34.819500  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:34.819756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:35.318461  475694 type.go:168] "Request Body" body=""
	I1216 04:35:35.318545  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:35.318883  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:35.818350  475694 type.go:168] "Request Body" body=""
	I1216 04:35:35.818457  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:35.818780  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:36.319383  475694 type.go:168] "Request Body" body=""
	I1216 04:35:36.319450  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:36.319711  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:36.319751  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:36.818719  475694 type.go:168] "Request Body" body=""
	I1216 04:35:36.818823  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:36.819149  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:37.318863  475694 type.go:168] "Request Body" body=""
	I1216 04:35:37.318957  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:37.319340  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:37.819103  475694 type.go:168] "Request Body" body=""
	I1216 04:35:37.819178  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:37.819440  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:38.318528  475694 type.go:168] "Request Body" body=""
	I1216 04:35:38.318602  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:38.318927  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:38.818449  475694 type.go:168] "Request Body" body=""
	I1216 04:35:38.818523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:38.818875  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:38.818930  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:39.318332  475694 type.go:168] "Request Body" body=""
	I1216 04:35:39.318414  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:39.318736  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:39.818477  475694 type.go:168] "Request Body" body=""
	I1216 04:35:39.818550  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:39.818846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:40.318380  475694 type.go:168] "Request Body" body=""
	I1216 04:35:40.318452  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:40.318777  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:40.818480  475694 type.go:168] "Request Body" body=""
	I1216 04:35:40.818560  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:40.818825  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:41.318437  475694 type.go:168] "Request Body" body=""
	I1216 04:35:41.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:41.318879  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:41.318931  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:41.818408  475694 type.go:168] "Request Body" body=""
	I1216 04:35:41.818485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:41.818817  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:42.319418  475694 type.go:168] "Request Body" body=""
	I1216 04:35:42.319504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:42.319849  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:42.818357  475694 type.go:168] "Request Body" body=""
	I1216 04:35:42.818432  475694 node_ready.go:38] duration metric: took 6m0.000197669s for node "functional-763073" to be "Ready" ...
	I1216 04:35:42.821511  475694 out.go:203] 
	W1216 04:35:42.824400  475694 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 04:35:42.824420  475694 out.go:285] * 
	W1216 04:35:42.826578  475694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:35:42.829442  475694 out.go:203] 
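[Editor's note] The trace above is minikube's node readiness poll: a GET to /api/v1/nodes/functional-763073 roughly every 500ms (per the timestamps), each attempt ending in "connection refused", until the six-minute deadline surfaces as the GUEST_START error above. Below is a minimal sketch of that wait pattern using client-go and apimachinery's wait helpers. The node name, API address, kubeconfig path, interval, and timeout are taken from this log; the code is an illustration of the technique, not minikube's actual implementation.

	// Sketch: poll a node's Ready condition every 500ms for up to 6 minutes,
	// mirroring the retry loop visible in the trace above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "functional-763073", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient errors (e.g. connection refused) keep the poll going
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait result:", err) // "context deadline exceeded" reproduces the failure above
	}

Because every attempt is refused rather than timing out, the whole budget is spent without the Ready condition ever being evaluated, which matches the 6m0.000197669s duration metric recorded above.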
	
	
	==> CRI-O <==
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.805841333Z" level=info msg="Using the internal default seccomp profile"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.805849087Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.805855709Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.805861575Z" level=info msg="RDT not available in the host system"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.805877075Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.806774454Z" level=info msg="Conmon does support the --sync option"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.806802081Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.806817876Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.807618582Z" level=info msg="Conmon does support the --sync option"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.807656342Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.807829275Z" level=info msg="Updated default CNI network name to "
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.808388764Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.808772563Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.808831681Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.846853056Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.846899998Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.846964187Z" level=info msg="Create NRI interface"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.84713593Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.847160546Z" level=info msg="runtime interface created"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.847179369Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.84718654Z" level=info msg="runtime interface starting up..."
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.847193703Z" level=info msg="starting plugins..."
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.847212165Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.847303653Z" level=info msg="No systemd watchdog enabled"
	Dec 16 04:29:39 functional-763073 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:35:44.763486    8636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:44.763993    8636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:44.765804    8636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:44.766156    8636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:44.767673    8636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
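[Editor's note] "connection refused" (as opposed to an i/o timeout) means the host is reachable but nothing is listening on the port, i.e. kube-apiserver is simply not running. A quick sketch of how to confirm that distinction, with the address taken from the log above:

	// Sketch: a refused dial fails immediately; an unreachable host times out.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not listening:", err) // "connect: connection refused" in this report
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8441")
	}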
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:35:44 up  3:18,  0 user,  load average: 0.72, 0.37, 0.81
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:35:42 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:43 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 16 04:35:43 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:43 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:43 functional-763073 kubelet[8527]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:43 functional-763073 kubelet[8527]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:43 functional-763073 kubelet[8527]: E1216 04:35:43.137132    8527 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:43 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:43 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:43 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 16 04:35:43 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:43 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:43 functional-763073 kubelet[8548]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:43 functional-763073 kubelet[8548]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:43 functional-763073 kubelet[8548]: E1216 04:35:43.878410    8548 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:43 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:43 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:44 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 16 04:35:44 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:44 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:44 functional-763073 kubelet[8607]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:44 functional-763073 kubelet[8607]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:44 functional-763073 kubelet[8607]: E1216 04:35:44.643758    8607 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:44 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:44 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
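[Editor's note] The kubelet section at the end of the dump above is the root cause of the whole failure chain: this kubelet build refuses to validate its configuration on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), systemd restarts it in a tight loop (restart counter 1138-1140), the static-pod apiserver therefore never comes up, and every API call ends in "connection refused". The kubelet's check amounts to asking whether /sys/fs/cgroup is the unified cgroup v2 filesystem; a minimal sketch of the same detection follows (not the kubelet's own code).

	// Sketch: detect whether the host mounts the unified cgroup v2 hierarchy,
	// the property the kubelet validation above is rejecting.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			panic(err)
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified)")
		} else {
			fmt.Println("cgroup v1 or hybrid - this kubelet build will not start")
		}
	}

The host here (Ubuntu 20.04's 5.15.0-1084-aws kernel, per the kernel and dmesg sections) appears to still be on cgroup v1, which is consistent with the validation error.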
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (395.161918ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-763073 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-763073 get po -A: exit status 1 (73.501795ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-763073 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-763073 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-763073 get po -A"
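[Editor's note] The three assertions above run `kubectl get po -A` against the test context and require an empty stderr plus "kube-system" somewhere in stdout; with the apiserver down, both fail. A reduced sketch of that check (command and expected substring taken from the test output above; this is not the test's actual code):

	// Sketch: run kubectl for a context and assert kube-system pods are listed.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-763073", "get", "po", "-A").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err, string(out)) // "connection refused" in this report
			return
		}
		if !strings.Contains(string(out), "kube-system") {
			fmt.Println("expected kube-system pods in output")
		}
	}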
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:

-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
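The full docker container inspect dump above can be narrowed to just the fields that matter using a Go template, which is the same mechanism minikube uses later in these logs to read the forwarded SSH port and the container IP. A minimal sketch against the same profile (assumes the container still exists):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-763073
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' functional-763073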
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (311.883949ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
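Per minikube status --help, component health is encoded in the exit code (1: host not OK, 2: cluster not OK, 4: Kubernetes not OK), so exit status 2 with Host reporting Running is consistent with an unhealthy control plane inside a container that is itself up. The same information is available in machine-readable form; a sketch against this profile:

	out/minikube-linux-arm64 status -p functional-763073 --output json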
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-763073 logs -n 25: (1.060607434s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-861171 addons list                                                                                                                     │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:20 UTC │
	│ addons         │ functional-861171 addons list -o json                                                                                                             │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:20 UTC │
	│ service        │ functional-861171 service hello-node-connect --url                                                                                                │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:20 UTC │
	│ start          │ -p functional-861171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │                     │
	│ start          │ -p functional-861171 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                   │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │                     │
	│ start          │ -p functional-861171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                         │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-861171 --alsologtostderr -v=1                                                                                    │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:20 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service list                                                                                                                    │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service list -o json                                                                                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service --namespace=default --https --url hello-node                                                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service hello-node --url --format={{.IP}}                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ service        │ functional-861171 service hello-node --url                                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format short --alsologtostderr                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format yaml --alsologtostderr                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ ssh            │ functional-861171 ssh pgrep buildkitd                                                                                                             │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │                     │
	│ image          │ functional-861171 image build -t localhost/my-image:functional-861171 testdata/build --alsologtostderr                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format json --alsologtostderr                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format table --alsologtostderr                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls                                                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ delete         │ -p functional-861171                                                                                                                              │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ start          │ -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │                     │
	│ start          │ -p functional-763073 --alsologtostderr -v=8                                                                                                       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
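The failing profile comes from the next-to-last row of the audit table above; to reproduce the start outside CI, the equivalent invocation is (a sketch, using the same binary path as this workspace):

	out/minikube-linux-arm64 start -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0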
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:29:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:29:36.794313  475694 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:29:36.794434  475694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:29:36.794446  475694 out.go:374] Setting ErrFile to fd 2...
	I1216 04:29:36.794452  475694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:29:36.794700  475694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:29:36.795091  475694 out.go:368] Setting JSON to false
	I1216 04:29:36.795948  475694 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11523,"bootTime":1765847854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:29:36.796022  475694 start.go:143] virtualization:  
	I1216 04:29:36.799564  475694 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:29:36.803377  475694 notify.go:221] Checking for updates...
	I1216 04:29:36.806471  475694 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:29:36.809418  475694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:29:36.812382  475694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:36.815368  475694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:29:36.818384  475694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:29:36.821299  475694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:29:36.824780  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:36.824898  475694 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:29:36.853440  475694 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:29:36.853553  475694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:29:36.911081  475694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:29:36.901976085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:29:36.911198  475694 docker.go:319] overlay module found
	I1216 04:29:36.914378  475694 out.go:179] * Using the docker driver based on existing profile
	I1216 04:29:36.917157  475694 start.go:309] selected driver: docker
	I1216 04:29:36.917180  475694 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:36.917338  475694 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:29:36.917450  475694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:29:36.970986  475694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:29:36.961820507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:29:36.971442  475694 cni.go:84] Creating CNI manager for ""
	I1216 04:29:36.971503  475694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:29:36.971553  475694 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:36.974751  475694 out.go:179] * Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	I1216 04:29:36.977516  475694 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:29:36.980431  475694 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:29:36.983493  475694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:29:36.983530  475694 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:29:36.983585  475694 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:29:36.983595  475694 cache.go:65] Caching tarball of preloaded images
	I1216 04:29:36.983676  475694 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:29:36.983683  475694 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:29:36.983782  475694 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:29:37.009018  475694 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:29:37.009047  475694 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:29:37.009096  475694 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:29:37.009136  475694 start.go:360] acquireMachinesLock for functional-763073: {Name:mk37f96bdb0feffde12ec58bbc71256d58abc2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:29:37.009247  475694 start.go:364] duration metric: took 82.708µs to acquireMachinesLock for "functional-763073"
	I1216 04:29:37.009287  475694 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:29:37.009293  475694 fix.go:54] fixHost starting: 
	I1216 04:29:37.009582  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:37.028726  475694 fix.go:112] recreateIfNeeded on functional-763073: state=Running err=<nil>
	W1216 04:29:37.028764  475694 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:29:37.032201  475694 out.go:252] * Updating the running docker "functional-763073" container ...
	I1216 04:29:37.032251  475694 machine.go:94] provisionDockerMachine start ...
	I1216 04:29:37.032362  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.050328  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.050673  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.050689  475694 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:29:37.192783  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:29:37.192826  475694 ubuntu.go:182] provisioning hostname "functional-763073"
	I1216 04:29:37.192931  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.211313  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.211628  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.211639  475694 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-763073 && echo "functional-763073" | sudo tee /etc/hostname
	I1216 04:29:37.354192  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:29:37.354269  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.376898  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.377254  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.377278  475694 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-763073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:29:37.509279  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: 
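The SSH script above only rewrites the 127.0.1.1 entry (or appends one) so the node container resolves its own hostname. The result can be verified from outside via minikube's SSH wrapper; a sketch, assuming the profile is still up:

	out/minikube-linux-arm64 -p functional-763073 ssh -- grep functional-763073 /etc/hosts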
	I1216 04:29:37.509306  475694 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:29:37.509326  475694 ubuntu.go:190] setting up certificates
	I1216 04:29:37.509346  475694 provision.go:84] configureAuth start
	I1216 04:29:37.509406  475694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:29:37.527206  475694 provision.go:143] copyHostCerts
	I1216 04:29:37.527264  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:29:37.527308  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 04:29:37.527320  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:29:37.527395  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:29:37.527487  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:29:37.527509  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 04:29:37.527517  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:29:37.527545  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:29:37.527594  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:29:37.527615  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 04:29:37.527622  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:29:37.527648  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:29:37.527699  475694 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.functional-763073 san=[127.0.0.1 192.168.49.2 functional-763073 localhost minikube]
	I1216 04:29:37.800879  475694 provision.go:177] copyRemoteCerts
	I1216 04:29:37.800949  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:29:37.800990  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.823288  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:37.920869  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 04:29:37.920929  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:29:37.938521  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 04:29:37.938583  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:29:37.956377  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 04:29:37.956439  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:29:37.974119  475694 provision.go:87] duration metric: took 464.750518ms to configureAuth
	I1216 04:29:37.974148  475694 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:29:37.974331  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:37.974450  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.991914  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.992233  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.992254  475694 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:29:38.308392  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:29:38.308467  475694 machine.go:97] duration metric: took 1.27620546s to provisionDockerMachine
	I1216 04:29:38.308501  475694 start.go:293] postStartSetup for "functional-763073" (driver="docker")
	I1216 04:29:38.308543  475694 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:29:38.308636  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:29:38.308736  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.327973  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.425975  475694 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:29:38.429465  475694 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 04:29:38.429486  475694 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 04:29:38.429491  475694 command_runner.go:130] > VERSION_ID="12"
	I1216 04:29:38.429495  475694 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 04:29:38.429500  475694 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 04:29:38.429503  475694 command_runner.go:130] > ID=debian
	I1216 04:29:38.429508  475694 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 04:29:38.429575  475694 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 04:29:38.429584  475694 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 04:29:38.429642  475694 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:29:38.429664  475694 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:29:38.429675  475694 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:29:38.429740  475694 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:29:38.429824  475694 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 04:29:38.429840  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> /etc/ssl/certs/4417272.pem
	I1216 04:29:38.429918  475694 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> hosts in /etc/test/nested/copy/441727
	I1216 04:29:38.429926  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> /etc/test/nested/copy/441727/hosts
	I1216 04:29:38.429973  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/441727
	I1216 04:29:38.438164  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:29:38.456472  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts --> /etc/test/nested/copy/441727/hosts (40 bytes)
	I1216 04:29:38.474815  475694 start.go:296] duration metric: took 166.27897ms for postStartSetup
	I1216 04:29:38.474942  475694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:29:38.475008  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.493257  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.586186  475694 command_runner.go:130] > 13%
	I1216 04:29:38.586744  475694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:29:38.591214  475694 command_runner.go:130] > 169G
	I1216 04:29:38.591631  475694 fix.go:56] duration metric: took 1.582334669s for fixHost
	I1216 04:29:38.591655  475694 start.go:83] releasing machines lock for "functional-763073", held for 1.582392532s
	I1216 04:29:38.591756  475694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:29:38.610497  475694 ssh_runner.go:195] Run: cat /version.json
	I1216 04:29:38.610580  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.610804  475694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:29:38.610862  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.644780  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.648235  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.740654  475694 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765575274-22117", "minikube_version": "v1.37.0", "commit": "908107e58d7f489afb59ecef3679cbdc57b624cc"}
	I1216 04:29:38.740792  475694 ssh_runner.go:195] Run: systemctl --version
	I1216 04:29:38.835621  475694 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 04:29:38.838633  475694 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 04:29:38.838716  475694 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
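The Temporary Redirect body above is the expected response from registry.k8s.io and confirms outbound registry access from inside the node. The same probe from the log can be replayed by hand (a sketch against this profile):

	out/minikube-linux-arm64 -p functional-763073 ssh -- curl -sS -m 2 https://registry.k8s.io/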
	I1216 04:29:38.838811  475694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:29:38.876422  475694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 04:29:38.880827  475694 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 04:29:38.881001  475694 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:29:38.881102  475694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:29:38.888966  475694 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:29:38.888992  475694 start.go:496] detecting cgroup driver to use...
	I1216 04:29:38.889023  475694 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:29:38.889116  475694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:29:38.904919  475694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:29:38.918230  475694 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:29:38.918296  475694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:29:38.934386  475694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:29:38.947903  475694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:29:39.064725  475694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:29:39.186461  475694 docker.go:234] disabling docker service ...
	I1216 04:29:39.186555  475694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:29:39.201259  475694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:29:39.214213  475694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:29:39.331697  475694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:29:39.468929  475694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:29:39.481743  475694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:29:39.494008  475694 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1216 04:29:39.494807  475694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:29:39.494889  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.503668  475694 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:29:39.503751  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.513027  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.521738  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.530476  475694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:29:39.538796  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.547730  475694 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.556341  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.565046  475694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:29:39.571643  475694 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 04:29:39.572565  475694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:29:39.579896  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:39.695396  475694 ssh_runner.go:195] Run: sudo systemctl restart crio
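The sequence above is minikube rewriting CRI-O's drop-in config before restarting the service; the two durable settings (pause image and cgroup driver) are equivalent to editing /etc/crio/crio.conf.d/02-crio.conf by hand inside the node. A sketch, with the sed expressions taken verbatim from the log:

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl restart crio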
	I1216 04:29:39.852818  475694 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:29:39.852930  475694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:29:39.856967  475694 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1216 04:29:39.856989  475694 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 04:29:39.856996  475694 command_runner.go:130] > Device: 0,72	Inode: 1641        Links: 1
	I1216 04:29:39.857013  475694 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:29:39.857019  475694 command_runner.go:130] > Access: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857028  475694 command_runner.go:130] > Modify: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857036  475694 command_runner.go:130] > Change: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857040  475694 command_runner.go:130] >  Birth: -
	I1216 04:29:39.857332  475694 start.go:564] Will wait 60s for crictl version
	I1216 04:29:39.857393  475694 ssh_runner.go:195] Run: which crictl
	I1216 04:29:39.860635  475694 command_runner.go:130] > /usr/local/bin/crictl
	I1216 04:29:39.860907  475694 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:29:39.883882  475694 command_runner.go:130] > Version:  0.1.0
	I1216 04:29:39.883905  475694 command_runner.go:130] > RuntimeName:  cri-o
	I1216 04:29:39.883910  475694 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1216 04:29:39.883916  475694 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 04:29:39.886266  475694 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:29:39.886355  475694 ssh_runner.go:195] Run: crio --version
	I1216 04:29:39.912976  475694 command_runner.go:130] > crio version 1.34.3
	I1216 04:29:39.913004  475694 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 04:29:39.913011  475694 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 04:29:39.913016  475694 command_runner.go:130] >    GitTreeState:   dirty
	I1216 04:29:39.913021  475694 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 04:29:39.913026  475694 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 04:29:39.913030  475694 command_runner.go:130] >    Compiler:       gc
	I1216 04:29:39.913034  475694 command_runner.go:130] >    Platform:       linux/arm64
	I1216 04:29:39.913044  475694 command_runner.go:130] >    Linkmode:       static
	I1216 04:29:39.913048  475694 command_runner.go:130] >    BuildTags:
	I1216 04:29:39.913052  475694 command_runner.go:130] >      static
	I1216 04:29:39.913055  475694 command_runner.go:130] >      netgo
	I1216 04:29:39.913059  475694 command_runner.go:130] >      osusergo
	I1216 04:29:39.913089  475694 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 04:29:39.913094  475694 command_runner.go:130] >      seccomp
	I1216 04:29:39.913097  475694 command_runner.go:130] >      apparmor
	I1216 04:29:39.913101  475694 command_runner.go:130] >      selinux
	I1216 04:29:39.913104  475694 command_runner.go:130] >    LDFlags:          unknown
	I1216 04:29:39.913108  475694 command_runner.go:130] >    SeccompEnabled:   true
	I1216 04:29:39.913112  475694 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 04:29:39.915574  475694 ssh_runner.go:195] Run: crio --version
	I1216 04:29:39.945490  475694 command_runner.go:130] > crio version 1.34.3
	I1216 04:29:39.945513  475694 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 04:29:39.945520  475694 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 04:29:39.945525  475694 command_runner.go:130] >    GitTreeState:   dirty
	I1216 04:29:39.945530  475694 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 04:29:39.945534  475694 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 04:29:39.945538  475694 command_runner.go:130] >    Compiler:       gc
	I1216 04:29:39.945543  475694 command_runner.go:130] >    Platform:       linux/arm64
	I1216 04:29:39.945548  475694 command_runner.go:130] >    Linkmode:       static
	I1216 04:29:39.945551  475694 command_runner.go:130] >    BuildTags:
	I1216 04:29:39.945557  475694 command_runner.go:130] >      static
	I1216 04:29:39.945561  475694 command_runner.go:130] >      netgo
	I1216 04:29:39.945587  475694 command_runner.go:130] >      osusergo
	I1216 04:29:39.945594  475694 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 04:29:39.945598  475694 command_runner.go:130] >      seccomp
	I1216 04:29:39.945601  475694 command_runner.go:130] >      apparmor
	I1216 04:29:39.945607  475694 command_runner.go:130] >      selinux
	I1216 04:29:39.945617  475694 command_runner.go:130] >    LDFlags:          unknown
	I1216 04:29:39.945623  475694 command_runner.go:130] >    SeccompEnabled:   true
	I1216 04:29:39.945639  475694 command_runner.go:130] >    AppArmorEnabled:  false
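Both crio --version calls above agree on 1.34.3, matching the earlier crictl report. To confirm the runtime state from outside the container, the same check can be replayed over SSH (a sketch):

	out/minikube-linux-arm64 -p functional-763073 ssh -- sudo /usr/local/bin/crictl version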
	I1216 04:29:39.952832  475694 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 04:29:39.955738  475694 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:29:39.972578  475694 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:29:39.976813  475694 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 04:29:39.976940  475694 kubeadm.go:884] updating cluster {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:29:39.977085  475694 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:29:39.977157  475694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:29:40.026676  475694 command_runner.go:130] > {
	I1216 04:29:40.026700  475694 command_runner.go:130] >   "images":  [
	I1216 04:29:40.026707  475694 command_runner.go:130] >     {
	I1216 04:29:40.026715  475694 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 04:29:40.026721  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026727  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 04:29:40.026731  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026736  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026745  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 04:29:40.026758  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 04:29:40.026762  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026770  475694 command_runner.go:130] >       "size":  "111333938",
	I1216 04:29:40.026775  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.026789  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026796  475694 command_runner.go:130] >     },
	I1216 04:29:40.026800  475694 command_runner.go:130] >     {
	I1216 04:29:40.026807  475694 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 04:29:40.026815  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026820  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 04:29:40.026827  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026831  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026843  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 04:29:40.026852  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 04:29:40.026859  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026863  475694 command_runner.go:130] >       "size":  "29037500",
	I1216 04:29:40.026867  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.026879  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026883  475694 command_runner.go:130] >     },
	I1216 04:29:40.026895  475694 command_runner.go:130] >     {
	I1216 04:29:40.026906  475694 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 04:29:40.026917  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026927  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 04:29:40.026930  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026934  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026942  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 04:29:40.026954  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 04:29:40.026962  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026966  475694 command_runner.go:130] >       "size":  "74491780",
	I1216 04:29:40.026974  475694 command_runner.go:130] >       "username":  "nonroot",
	I1216 04:29:40.026979  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026985  475694 command_runner.go:130] >     },
	I1216 04:29:40.026988  475694 command_runner.go:130] >     {
	I1216 04:29:40.026995  475694 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 04:29:40.027002  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027012  475694 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 04:29:40.027019  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027023  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027031  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 04:29:40.027041  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 04:29:40.027047  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027052  475694 command_runner.go:130] >       "size":  "60857170",
	I1216 04:29:40.027058  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027063  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027070  475694 command_runner.go:130] >       },
	I1216 04:29:40.027084  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027092  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027096  475694 command_runner.go:130] >     },
	I1216 04:29:40.027100  475694 command_runner.go:130] >     {
	I1216 04:29:40.027106  475694 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 04:29:40.027114  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027119  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 04:29:40.027129  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027138  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027146  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 04:29:40.027157  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 04:29:40.027161  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027168  475694 command_runner.go:130] >       "size":  "84949999",
	I1216 04:29:40.027171  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027175  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027183  475694 command_runner.go:130] >       },
	I1216 04:29:40.027187  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027192  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027200  475694 command_runner.go:130] >     },
	I1216 04:29:40.027203  475694 command_runner.go:130] >     {
	I1216 04:29:40.027214  475694 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 04:29:40.027229  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027235  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 04:29:40.027241  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027245  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027254  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 04:29:40.027266  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 04:29:40.027269  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027278  475694 command_runner.go:130] >       "size":  "72170325",
	I1216 04:29:40.027281  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027288  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027292  475694 command_runner.go:130] >       },
	I1216 04:29:40.027300  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027305  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027311  475694 command_runner.go:130] >     },
	I1216 04:29:40.027314  475694 command_runner.go:130] >     {
	I1216 04:29:40.027320  475694 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 04:29:40.027324  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027333  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 04:29:40.027337  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027345  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027357  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 04:29:40.027366  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 04:29:40.027372  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027376  475694 command_runner.go:130] >       "size":  "74106775",
	I1216 04:29:40.027384  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027389  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027395  475694 command_runner.go:130] >     },
	I1216 04:29:40.027399  475694 command_runner.go:130] >     {
	I1216 04:29:40.027405  475694 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 04:29:40.027409  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027423  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 04:29:40.027430  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027434  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027442  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 04:29:40.027466  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 04:29:40.027473  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027478  475694 command_runner.go:130] >       "size":  "49822549",
	I1216 04:29:40.027485  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027489  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027492  475694 command_runner.go:130] >       },
	I1216 04:29:40.027498  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027507  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027514  475694 command_runner.go:130] >     },
	I1216 04:29:40.027517  475694 command_runner.go:130] >     {
	I1216 04:29:40.027524  475694 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 04:29:40.027531  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027536  475694 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.027542  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027547  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027557  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 04:29:40.027568  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 04:29:40.027573  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027586  475694 command_runner.go:130] >       "size":  "519884",
	I1216 04:29:40.027593  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027598  475694 command_runner.go:130] >         "value":  "65535"
	I1216 04:29:40.027601  475694 command_runner.go:130] >       },
	I1216 04:29:40.027610  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027614  475694 command_runner.go:130] >       "pinned":  true
	I1216 04:29:40.027620  475694 command_runner.go:130] >     }
	I1216 04:29:40.027623  475694 command_runner.go:130] >   ]
	I1216 04:29:40.027626  475694 command_runner.go:130] > }
	I1216 04:29:40.029894  475694 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:29:40.029927  475694 crio.go:433] Images already preloaded, skipping extraction
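
The two `sudo crictl images --output json` runs above and below return the payload that the preload check walks. As a minimal, hypothetical sketch (not minikube's actual code), the same check can be reproduced in Go by shelling out to crictl and decoding only the fields visible in this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the `crictl images --output json` shape in the log:
// an "images" array whose entries carry id, repoTags, repoDigests,
// size (as a string), username, and pinned.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	// The same command the log shows minikube running over SSH.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.ID, img.RepoTags, "pinned:", img.Pinned)
	}
}

Run on the node, this prints one line per image, which is enough to diff against an expected preload manifest.
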
	I1216 04:29:40.029987  475694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:29:40.055653  475694 command_runner.go:130] > {
	I1216 04:29:40.055673  475694 command_runner.go:130] >   "images":  [
	I1216 04:29:40.055678  475694 command_runner.go:130] >     {
	I1216 04:29:40.055687  475694 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 04:29:40.055692  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055697  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 04:29:40.055701  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055705  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055715  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 04:29:40.055724  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 04:29:40.055728  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055732  475694 command_runner.go:130] >       "size":  "111333938",
	I1216 04:29:40.055736  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055740  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055744  475694 command_runner.go:130] >     },
	I1216 04:29:40.055747  475694 command_runner.go:130] >     {
	I1216 04:29:40.055753  475694 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 04:29:40.055757  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055762  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 04:29:40.055765  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055769  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055787  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 04:29:40.055795  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 04:29:40.055798  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055802  475694 command_runner.go:130] >       "size":  "29037500",
	I1216 04:29:40.055806  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055817  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055820  475694 command_runner.go:130] >     },
	I1216 04:29:40.055824  475694 command_runner.go:130] >     {
	I1216 04:29:40.055830  475694 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 04:29:40.055833  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055838  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 04:29:40.055841  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055845  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055854  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 04:29:40.055862  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 04:29:40.055865  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055869  475694 command_runner.go:130] >       "size":  "74491780",
	I1216 04:29:40.055873  475694 command_runner.go:130] >       "username":  "nonroot",
	I1216 04:29:40.055876  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055879  475694 command_runner.go:130] >     },
	I1216 04:29:40.055882  475694 command_runner.go:130] >     {
	I1216 04:29:40.055891  475694 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 04:29:40.055894  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055899  475694 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 04:29:40.055904  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055908  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055915  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 04:29:40.055923  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 04:29:40.055926  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055929  475694 command_runner.go:130] >       "size":  "60857170",
	I1216 04:29:40.055933  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.055937  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.055940  475694 command_runner.go:130] >       },
	I1216 04:29:40.055952  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055956  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055959  475694 command_runner.go:130] >     },
	I1216 04:29:40.055961  475694 command_runner.go:130] >     {
	I1216 04:29:40.055968  475694 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 04:29:40.055971  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055976  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 04:29:40.055979  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055983  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055990  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 04:29:40.055998  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 04:29:40.056001  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056005  475694 command_runner.go:130] >       "size":  "84949999",
	I1216 04:29:40.056008  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056012  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056015  475694 command_runner.go:130] >       },
	I1216 04:29:40.056018  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056022  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056024  475694 command_runner.go:130] >     },
	I1216 04:29:40.056027  475694 command_runner.go:130] >     {
	I1216 04:29:40.056033  475694 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 04:29:40.056037  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056043  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 04:29:40.056045  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056049  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056057  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 04:29:40.056065  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 04:29:40.056068  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056072  475694 command_runner.go:130] >       "size":  "72170325",
	I1216 04:29:40.056075  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056079  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056082  475694 command_runner.go:130] >       },
	I1216 04:29:40.056085  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056092  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056096  475694 command_runner.go:130] >     },
	I1216 04:29:40.056099  475694 command_runner.go:130] >     {
	I1216 04:29:40.056106  475694 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 04:29:40.056110  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056115  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 04:29:40.056118  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056122  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056130  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 04:29:40.056137  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 04:29:40.056141  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056144  475694 command_runner.go:130] >       "size":  "74106775",
	I1216 04:29:40.056148  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056152  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056155  475694 command_runner.go:130] >     },
	I1216 04:29:40.056158  475694 command_runner.go:130] >     {
	I1216 04:29:40.056164  475694 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 04:29:40.056168  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056173  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 04:29:40.056176  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056180  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056188  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 04:29:40.056204  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 04:29:40.056207  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056211  475694 command_runner.go:130] >       "size":  "49822549",
	I1216 04:29:40.056215  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056218  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056221  475694 command_runner.go:130] >       },
	I1216 04:29:40.056225  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056228  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056231  475694 command_runner.go:130] >     },
	I1216 04:29:40.056233  475694 command_runner.go:130] >     {
	I1216 04:29:40.056240  475694 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 04:29:40.056247  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056251  475694 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.056255  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056259  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056266  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 04:29:40.056278  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 04:29:40.056281  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056285  475694 command_runner.go:130] >       "size":  "519884",
	I1216 04:29:40.056289  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056293  475694 command_runner.go:130] >         "value":  "65535"
	I1216 04:29:40.056296  475694 command_runner.go:130] >       },
	I1216 04:29:40.056299  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056303  475694 command_runner.go:130] >       "pinned":  true
	I1216 04:29:40.056305  475694 command_runner.go:130] >     }
	I1216 04:29:40.056308  475694 command_runner.go:130] >   ]
	I1216 04:29:40.056312  475694 command_runner.go:130] > }
	I1216 04:29:40.057842  475694 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:29:40.057866  475694 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:29:40.057874  475694 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 04:29:40.058028  475694 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-763073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
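
The kubelet [Unit]/[Service] fragment above is rendered from the cluster config that follows it. A hedged illustration with text/template (the template text, variable names, and flag subset here are invented for illustration and are not minikube's actual template, which carries more flags such as --cgroups-per-qos and --bootstrap-kubeconfig):

package main

import (
	"os"
	"text/template"
)

// kubeletTmpl approximates the systemd drop-in shown above;
// minikube's real template differs.
const kubeletTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.K8sVersion}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	if err := t.Execute(os.Stdout, map[string]string{
		"Runtime":    "crio",
		"K8sVersion": "v1.35.0-beta.0",
		"NodeName":   "functional-763073",
		"NodeIP":     "192.168.49.2",
	}); err != nil {
		panic(err)
	}
}
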
	I1216 04:29:40.058117  475694 ssh_runner.go:195] Run: crio config
	I1216 04:29:40.108801  475694 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1216 04:29:40.108825  475694 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1216 04:29:40.108833  475694 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1216 04:29:40.108837  475694 command_runner.go:130] > #
	I1216 04:29:40.108844  475694 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1216 04:29:40.108850  475694 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1216 04:29:40.108857  475694 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1216 04:29:40.108874  475694 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1216 04:29:40.108891  475694 command_runner.go:130] > # reload'.
	I1216 04:29:40.108898  475694 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1216 04:29:40.108905  475694 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1216 04:29:40.108915  475694 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1216 04:29:40.108922  475694 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1216 04:29:40.108925  475694 command_runner.go:130] > [crio]
	I1216 04:29:40.108932  475694 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1216 04:29:40.108939  475694 command_runner.go:130] > # containers images, in this directory.
	I1216 04:29:40.109485  475694 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1216 04:29:40.109505  475694 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1216 04:29:40.110050  475694 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1216 04:29:40.110069  475694 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separately from Root.
	I1216 04:29:40.110418  475694 command_runner.go:130] > # imagestore = ""
	I1216 04:29:40.110434  475694 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1216 04:29:40.110442  475694 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1216 04:29:40.110623  475694 command_runner.go:130] > # storage_driver = "overlay"
	I1216 04:29:40.110671  475694 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1216 04:29:40.110692  475694 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1216 04:29:40.110809  475694 command_runner.go:130] > # storage_option = [
	I1216 04:29:40.110816  475694 command_runner.go:130] > # ]
	I1216 04:29:40.110824  475694 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1216 04:29:40.110831  475694 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1216 04:29:40.110973  475694 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1216 04:29:40.110983  475694 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1216 04:29:40.111015  475694 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1216 04:29:40.111021  475694 command_runner.go:130] > # always happen on a node reboot
	I1216 04:29:40.111194  475694 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1216 04:29:40.111214  475694 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1216 04:29:40.111221  475694 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1216 04:29:40.111260  475694 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1216 04:29:40.111402  475694 command_runner.go:130] > # version_file_persist = ""
	I1216 04:29:40.111414  475694 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1216 04:29:40.111423  475694 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1216 04:29:40.111428  475694 command_runner.go:130] > # internal_wipe = true
	I1216 04:29:40.111436  475694 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1216 04:29:40.111471  475694 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1216 04:29:40.111604  475694 command_runner.go:130] > # internal_repair = true
	I1216 04:29:40.111614  475694 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1216 04:29:40.111621  475694 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1216 04:29:40.111626  475694 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1216 04:29:40.111750  475694 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1216 04:29:40.111761  475694 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1216 04:29:40.111764  475694 command_runner.go:130] > [crio.api]
	I1216 04:29:40.111770  475694 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1216 04:29:40.111973  475694 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1216 04:29:40.111983  475694 command_runner.go:130] > # IP address on which the stream server will listen.
	I1216 04:29:40.112123  475694 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1216 04:29:40.112134  475694 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1216 04:29:40.112139  475694 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1216 04:29:40.112334  475694 command_runner.go:130] > # stream_port = "0"
	I1216 04:29:40.112344  475694 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1216 04:29:40.112496  475694 command_runner.go:130] > # stream_enable_tls = false
	I1216 04:29:40.112506  475694 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1216 04:29:40.112646  475694 command_runner.go:130] > # stream_idle_timeout = ""
	I1216 04:29:40.112658  475694 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1216 04:29:40.112664  475694 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1216 04:29:40.112790  475694 command_runner.go:130] > # stream_tls_cert = ""
	I1216 04:29:40.112800  475694 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1216 04:29:40.112806  475694 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1216 04:29:40.112930  475694 command_runner.go:130] > # stream_tls_key = ""
	I1216 04:29:40.112940  475694 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1216 04:29:40.112947  475694 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1216 04:29:40.112956  475694 command_runner.go:130] > # automatically pick up the changes.
	I1216 04:29:40.113120  475694 command_runner.go:130] > # stream_tls_ca = ""
	I1216 04:29:40.113148  475694 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 04:29:40.113407  475694 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1216 04:29:40.113455  475694 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 04:29:40.113595  475694 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
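
The [crio.api] table printed above caps gRPC messages at 83886080 bytes (80 MiB) in both directions. A client talking to the crio.sock endpoint has to dial with matching call options, or large responses (image lists with many digests, bulky container status replies) can fail. A minimal sketch using google.golang.org/grpc, assuming the default socket path shown above; this is illustrative, not crictl's implementation:

package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Match CRI-O's grpc_max_send_msg_size / grpc_max_recv_msg_size
	// defaults of 80 * 1024 * 1024 bytes shown in the config dump.
	const maxMsg = 80 * 1024 * 1024

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxMsg),
			grpc.MaxCallSendMsgSize(maxMsg),
		),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// From here a CRI client (runtime/v1 ImageService) could issue
	// ListImages against conn without tripping the 80 MiB ceiling.
}
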
	I1216 04:29:40.113624  475694 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1216 04:29:40.113657  475694 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1216 04:29:40.113680  475694 command_runner.go:130] > [crio.runtime]
	I1216 04:29:40.113702  475694 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1216 04:29:40.113736  475694 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1216 04:29:40.113757  475694 command_runner.go:130] > # "nofile=1024:2048"
	I1216 04:29:40.113777  475694 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1216 04:29:40.113795  475694 command_runner.go:130] > # default_ulimits = [
	I1216 04:29:40.113822  475694 command_runner.go:130] > # ]
	I1216 04:29:40.113845  475694 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1216 04:29:40.113998  475694 command_runner.go:130] > # no_pivot = false
	I1216 04:29:40.114026  475694 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1216 04:29:40.114058  475694 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1216 04:29:40.114076  475694 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1216 04:29:40.114109  475694 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1216 04:29:40.114138  475694 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1216 04:29:40.114159  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 04:29:40.114189  475694 command_runner.go:130] > # conmon = ""
	I1216 04:29:40.114211  475694 command_runner.go:130] > # Cgroup setting for conmon
	I1216 04:29:40.114233  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1216 04:29:40.114382  475694 command_runner.go:130] > conmon_cgroup = "pod"
	I1216 04:29:40.114414  475694 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1216 04:29:40.114449  475694 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1216 04:29:40.114469  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 04:29:40.114514  475694 command_runner.go:130] > # conmon_env = [
	I1216 04:29:40.114538  475694 command_runner.go:130] > # ]
	I1216 04:29:40.114560  475694 command_runner.go:130] > # Additional environment variables to set for all the
	I1216 04:29:40.114591  475694 command_runner.go:130] > # containers. These are overridden if set in the
	I1216 04:29:40.114614  475694 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1216 04:29:40.114632  475694 command_runner.go:130] > # default_env = [
	I1216 04:29:40.114649  475694 command_runner.go:130] > # ]
	I1216 04:29:40.114679  475694 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1216 04:29:40.114706  475694 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1216 04:29:40.114884  475694 command_runner.go:130] > # selinux = false
	I1216 04:29:40.114896  475694 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1216 04:29:40.114903  475694 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1216 04:29:40.114909  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.114913  475694 command_runner.go:130] > # seccomp_profile = ""
	I1216 04:29:40.114950  475694 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1216 04:29:40.114969  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.114984  475694 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1216 04:29:40.115020  475694 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1216 04:29:40.115046  475694 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1216 04:29:40.115055  475694 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1216 04:29:40.115062  475694 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1216 04:29:40.115067  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.115072  475694 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1216 04:29:40.115077  475694 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1216 04:29:40.115116  475694 command_runner.go:130] > # the cgroup blockio controller.
	I1216 04:29:40.115133  475694 command_runner.go:130] > # blockio_config_file = ""
	I1216 04:29:40.115175  475694 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1216 04:29:40.115196  475694 command_runner.go:130] > # blockio parameters.
	I1216 04:29:40.115214  475694 command_runner.go:130] > # blockio_reload = false
	I1216 04:29:40.115235  475694 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1216 04:29:40.115262  475694 command_runner.go:130] > # irqbalance daemon.
	I1216 04:29:40.115417  475694 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1216 04:29:40.115505  475694 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask CRI-O should
	I1216 04:29:40.115615  475694 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1216 04:29:40.115655  475694 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1216 04:29:40.115678  475694 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1216 04:29:40.115698  475694 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1216 04:29:40.115716  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.115745  475694 command_runner.go:130] > # rdt_config_file = ""
	I1216 04:29:40.115769  475694 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1216 04:29:40.115788  475694 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1216 04:29:40.115822  475694 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1216 04:29:40.115844  475694 command_runner.go:130] > # separate_pull_cgroup = ""
	I1216 04:29:40.115864  475694 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1216 04:29:40.115884  475694 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1216 04:29:40.115919  475694 command_runner.go:130] > # will be added.
	I1216 04:29:40.115936  475694 command_runner.go:130] > # default_capabilities = [
	I1216 04:29:40.115952  475694 command_runner.go:130] > # 	"CHOWN",
	I1216 04:29:40.115983  475694 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1216 04:29:40.116006  475694 command_runner.go:130] > # 	"FSETID",
	I1216 04:29:40.116024  475694 command_runner.go:130] > # 	"FOWNER",
	I1216 04:29:40.116040  475694 command_runner.go:130] > # 	"SETGID",
	I1216 04:29:40.116070  475694 command_runner.go:130] > # 	"SETUID",
	I1216 04:29:40.116112  475694 command_runner.go:130] > # 	"SETPCAP",
	I1216 04:29:40.116150  475694 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1216 04:29:40.116170  475694 command_runner.go:130] > # 	"KILL",
	I1216 04:29:40.116187  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116209  475694 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1216 04:29:40.116243  475694 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1216 04:29:40.116264  475694 command_runner.go:130] > # add_inheritable_capabilities = false
	I1216 04:29:40.116284  475694 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1216 04:29:40.116316  475694 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 04:29:40.116336  475694 command_runner.go:130] > default_sysctls = [
	I1216 04:29:40.116352  475694 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1216 04:29:40.116370  475694 command_runner.go:130] > ]
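
Only three settings in this dump are active (uncommented): conmon_cgroup = "pod", cgroup_manager = "cgroupfs", and the default_sysctls list closed just above; everything else is a commented-out default. A small sketch of reading those keys back out of a crio.conf-style TOML document, assuming the github.com/BurntSushi/toml module (CRI-O's own config loader differs):

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// crioConf models just the three keys this log shows as active;
// all other options stay at CRI-O's built-in defaults.
type crioConf struct {
	Crio struct {
		Runtime struct {
			ConmonCgroup   string   `toml:"conmon_cgroup"`
			CgroupManager  string   `toml:"cgroup_manager"`
			DefaultSysctls []string `toml:"default_sysctls"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	doc := `
[crio.runtime]
conmon_cgroup = "pod"
cgroup_manager = "cgroupfs"
default_sysctls = ["net.ipv4.ip_unprivileged_port_start=0"]
`
	var c crioConf
	if err := toml.Unmarshal([]byte(doc), &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Crio.Runtime.CgroupManager, c.Crio.Runtime.ConmonCgroup, c.Crio.Runtime.DefaultSysctls)
}
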
	I1216 04:29:40.116402  475694 command_runner.go:130] > # List of devices on the host that a
	I1216 04:29:40.116430  475694 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1216 04:29:40.116449  475694 command_runner.go:130] > # allowed_devices = [
	I1216 04:29:40.116482  475694 command_runner.go:130] > # 	"/dev/fuse",
	I1216 04:29:40.116502  475694 command_runner.go:130] > # 	"/dev/net/tun",
	I1216 04:29:40.116519  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116549  475694 command_runner.go:130] > # List of additional devices, specified as
	I1216 04:29:40.116842  475694 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1216 04:29:40.116898  475694 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1216 04:29:40.116921  475694 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 04:29:40.116950  475694 command_runner.go:130] > # additional_devices = [
	I1216 04:29:40.116977  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116996  475694 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1216 04:29:40.117028  475694 command_runner.go:130] > # cdi_spec_dirs = [
	I1216 04:29:40.117054  475694 command_runner.go:130] > # 	"/etc/cdi",
	I1216 04:29:40.117101  475694 command_runner.go:130] > # 	"/var/run/cdi",
	I1216 04:29:40.117118  475694 command_runner.go:130] > # ]
	I1216 04:29:40.117139  475694 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1216 04:29:40.117174  475694 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1216 04:29:40.117193  475694 command_runner.go:130] > # Defaults to false.
	I1216 04:29:40.117222  475694 command_runner.go:130] > # device_ownership_from_security_context = false
	I1216 04:29:40.117264  475694 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1216 04:29:40.117284  475694 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1216 04:29:40.117301  475694 command_runner.go:130] > # hooks_dir = [
	I1216 04:29:40.117338  475694 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1216 04:29:40.117357  475694 command_runner.go:130] > # ]
	I1216 04:29:40.117377  475694 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1216 04:29:40.117412  475694 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1216 04:29:40.117421  475694 command_runner.go:130] > # its default mounts from the following two files:
	I1216 04:29:40.117425  475694 command_runner.go:130] > #
	I1216 04:29:40.117432  475694 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1216 04:29:40.117438  475694 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1216 04:29:40.117444  475694 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1216 04:29:40.117447  475694 command_runner.go:130] > #
	I1216 04:29:40.117454  475694 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1216 04:29:40.117461  475694 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1216 04:29:40.117467  475694 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1216 04:29:40.117517  475694 command_runner.go:130] > #      only add mounts it finds in this file.
	I1216 04:29:40.117534  475694 command_runner.go:130] > #
	I1216 04:29:40.117567  475694 command_runner.go:130] > # default_mounts_file = ""
	I1216 04:29:40.117599  475694 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1216 04:29:40.117644  475694 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1216 04:29:40.117670  475694 command_runner.go:130] > # pids_limit = -1
	I1216 04:29:40.117691  475694 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1216 04:29:40.117725  475694 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1216 04:29:40.117753  475694 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1216 04:29:40.117773  475694 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1216 04:29:40.117806  475694 command_runner.go:130] > # log_size_max = -1
	I1216 04:29:40.117830  475694 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1216 04:29:40.117850  475694 command_runner.go:130] > # log_to_journald = false
	I1216 04:29:40.117889  475694 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1216 04:29:40.117908  475694 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1216 04:29:40.117927  475694 command_runner.go:130] > # Path to directory for container attach sockets.
	I1216 04:29:40.117963  475694 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1216 04:29:40.117992  475694 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1216 04:29:40.118011  475694 command_runner.go:130] > # bind_mount_prefix = ""
	I1216 04:29:40.118045  475694 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1216 04:29:40.118064  475694 command_runner.go:130] > # read_only = false
	I1216 04:29:40.118085  475694 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1216 04:29:40.118118  475694 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1216 04:29:40.118145  475694 command_runner.go:130] > # live configuration reload.
	I1216 04:29:40.118163  475694 command_runner.go:130] > # log_level = "info"
	I1216 04:29:40.118200  475694 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1216 04:29:40.118229  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.118246  475694 command_runner.go:130] > # log_filter = ""
	I1216 04:29:40.118284  475694 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1216 04:29:40.118305  475694 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1216 04:29:40.118324  475694 command_runner.go:130] > # separated by comma.
	I1216 04:29:40.118360  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118379  475694 command_runner.go:130] > # uid_mappings = ""
	I1216 04:29:40.118400  475694 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1216 04:29:40.118433  475694 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1216 04:29:40.118453  475694 command_runner.go:130] > # separated by comma.
	I1216 04:29:40.118475  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118516  475694 command_runner.go:130] > # gid_mappings = ""
	I1216 04:29:40.118547  475694 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1216 04:29:40.118581  475694 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 04:29:40.118608  475694 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 04:29:40.118630  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118663  475694 command_runner.go:130] > # minimum_mappable_uid = -1
	I1216 04:29:40.118694  475694 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1216 04:29:40.118716  475694 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 04:29:40.118867  475694 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 04:29:40.119059  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.119080  475694 command_runner.go:130] > # minimum_mappable_gid = -1
	I1216 04:29:40.119119  475694 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1216 04:29:40.119149  475694 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1216 04:29:40.119169  475694 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1216 04:29:40.119206  475694 command_runner.go:130] > # ctr_stop_timeout = 30
	I1216 04:29:40.119228  475694 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1216 04:29:40.119249  475694 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1216 04:29:40.119286  475694 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1216 04:29:40.119304  475694 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1216 04:29:40.119323  475694 command_runner.go:130] > # drop_infra_ctr = true
	I1216 04:29:40.119357  475694 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1216 04:29:40.119378  475694 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1216 04:29:40.119425  475694 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1216 04:29:40.119453  475694 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1216 04:29:40.119476  475694 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1216 04:29:40.119511  475694 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1216 04:29:40.119541  475694 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1216 04:29:40.119560  475694 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1216 04:29:40.119590  475694 command_runner.go:130] > # shared_cpuset = ""
	I1216 04:29:40.119612  475694 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1216 04:29:40.119632  475694 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1216 04:29:40.119663  475694 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1216 04:29:40.119695  475694 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1216 04:29:40.119739  475694 command_runner.go:130] > # pinns_path = ""
	I1216 04:29:40.119766  475694 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1216 04:29:40.119787  475694 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1216 04:29:40.119820  475694 command_runner.go:130] > # enable_criu_support = true
	I1216 04:29:40.119849  475694 command_runner.go:130] > # Enable/disable the generation of the container,
	I1216 04:29:40.119870  475694 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1216 04:29:40.119901  475694 command_runner.go:130] > # enable_pod_events = false
	I1216 04:29:40.119923  475694 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1216 04:29:40.119945  475694 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1216 04:29:40.119977  475694 command_runner.go:130] > # default_runtime = "crun"
	I1216 04:29:40.120005  475694 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1216 04:29:40.120029  475694 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1216 04:29:40.120074  475694 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1216 04:29:40.120094  475694 command_runner.go:130] > # creation as a file is not desired either.
	I1216 04:29:40.120134  475694 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1216 04:29:40.120162  475694 command_runner.go:130] > # the hostname is being managed dynamically.
	I1216 04:29:40.120182  475694 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1216 04:29:40.120216  475694 command_runner.go:130] > # ]
	I1216 04:29:40.120248  475694 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1216 04:29:40.120270  475694 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1216 04:29:40.120320  475694 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1216 04:29:40.120347  475694 command_runner.go:130] > # Each entry in the table should follow the format:
	I1216 04:29:40.120396  475694 command_runner.go:130] > #
	I1216 04:29:40.120416  475694 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1216 04:29:40.120435  475694 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1216 04:29:40.120469  475694 command_runner.go:130] > # runtime_type = "oci"
	I1216 04:29:40.120490  475694 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1216 04:29:40.120514  475694 command_runner.go:130] > # inherit_default_runtime = false
	I1216 04:29:40.120552  475694 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1216 04:29:40.120570  475694 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1216 04:29:40.120589  475694 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1216 04:29:40.120618  475694 command_runner.go:130] > # monitor_env = []
	I1216 04:29:40.120639  475694 command_runner.go:130] > # privileged_without_host_devices = false
	I1216 04:29:40.120667  475694 command_runner.go:130] > # allowed_annotations = []
	I1216 04:29:40.120700  475694 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1216 04:29:40.120720  475694 command_runner.go:130] > # no_sync_log = false
	I1216 04:29:40.120739  475694 command_runner.go:130] > # default_annotations = {}
	I1216 04:29:40.120771  475694 command_runner.go:130] > # stream_websockets = false
	I1216 04:29:40.120795  475694 command_runner.go:130] > # seccomp_profile = ""
	I1216 04:29:40.120859  475694 command_runner.go:130] > # Where:
	I1216 04:29:40.120892  475694 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1216 04:29:40.120926  475694 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1216 04:29:40.120956  475694 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1216 04:29:40.120976  475694 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1216 04:29:40.121008  475694 command_runner.go:130] > #   in $PATH.
	I1216 04:29:40.121038  475694 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1216 04:29:40.121057  475694 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1216 04:29:40.121115  475694 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1216 04:29:40.121133  475694 command_runner.go:130] > #   state.
	I1216 04:29:40.121155  475694 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1216 04:29:40.121189  475694 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1216 04:29:40.121228  475694 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1216 04:29:40.121250  475694 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1216 04:29:40.121270  475694 command_runner.go:130] > #   the values from the default runtime on load time.
	I1216 04:29:40.121300  475694 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1216 04:29:40.121328  475694 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1216 04:29:40.121349  475694 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1216 04:29:40.121370  475694 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1216 04:29:40.121404  475694 command_runner.go:130] > #   The currently recognized values are:
	I1216 04:29:40.121434  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1216 04:29:40.121457  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1216 04:29:40.121484  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1216 04:29:40.121518  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1216 04:29:40.121541  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1216 04:29:40.121564  475694 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1216 04:29:40.121592  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1216 04:29:40.121620  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1216 04:29:40.121640  475694 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1216 04:29:40.121671  475694 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1216 04:29:40.121692  475694 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1216 04:29:40.121712  475694 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1216 04:29:40.121747  475694 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1216 04:29:40.121775  475694 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1216 04:29:40.121796  475694 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1216 04:29:40.121818  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1216 04:29:40.121849  475694 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1216 04:29:40.121873  475694 command_runner.go:130] > #   deprecated option "conmon".
	I1216 04:29:40.121896  475694 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1216 04:29:40.121916  475694 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1216 04:29:40.121945  475694 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1216 04:29:40.121969  475694 command_runner.go:130] > #   should be moved to the container's cgroup
	I1216 04:29:40.121989  475694 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1216 04:29:40.122009  475694 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1216 04:29:40.122039  475694 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1216 04:29:40.122065  475694 command_runner.go:130] > #   conmon-rs by using:
	I1216 04:29:40.122085  475694 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1216 04:29:40.122108  475694 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1216 04:29:40.122138  475694 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1216 04:29:40.122166  475694 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1216 04:29:40.122184  475694 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1216 04:29:40.122204  475694 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1216 04:29:40.122236  475694 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1216 04:29:40.122262  475694 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1216 04:29:40.122285  475694 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1216 04:29:40.122332  475694 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1216 04:29:40.122360  475694 command_runner.go:130] > #   when a machine crash happens.
	I1216 04:29:40.122382  475694 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1216 04:29:40.122406  475694 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1216 04:29:40.122443  475694 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1216 04:29:40.122473  475694 command_runner.go:130] > #   seccomp profile for the runtime.
	I1216 04:29:40.122495  475694 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1216 04:29:40.122537  475694 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1216 04:29:40.122553  475694 command_runner.go:130] > #
	I1216 04:29:40.122572  475694 command_runner.go:130] > # Using the seccomp notifier feature:
	I1216 04:29:40.122589  475694 command_runner.go:130] > #
	I1216 04:29:40.122624  475694 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1216 04:29:40.122646  475694 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1216 04:29:40.122662  475694 command_runner.go:130] > #
	I1216 04:29:40.122693  475694 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1216 04:29:40.122721  475694 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1216 04:29:40.122737  475694 command_runner.go:130] > #
	I1216 04:29:40.122758  475694 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1216 04:29:40.122777  475694 command_runner.go:130] > # feature.
	I1216 04:29:40.122810  475694 command_runner.go:130] > #
	I1216 04:29:40.122842  475694 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1216 04:29:40.122863  475694 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1216 04:29:40.122893  475694 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1216 04:29:40.122913  475694 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1216 04:29:40.122933  475694 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1216 04:29:40.122960  475694 command_runner.go:130] > #
	I1216 04:29:40.122986  475694 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1216 04:29:40.123006  475694 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1216 04:29:40.123023  475694 command_runner.go:130] > #
	I1216 04:29:40.123043  475694 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1216 04:29:40.123079  475694 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1216 04:29:40.123096  475694 command_runner.go:130] > #
	I1216 04:29:40.123117  475694 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1216 04:29:40.123147  475694 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1216 04:29:40.123171  475694 command_runner.go:130] > # limitation.
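Pulling the notifier requirements above together, a minimal pod sketch would set the annotation on the sandbox and pin restartPolicy to Never, assuming a runtime handler that lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations (pod name, container name and image below are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-debug                  # hypothetical name
      annotations:
        # Stop the workload ~5s after a blocked syscall is observed.
        io.kubernetes.cri-o.seccompNotifierAction: "stop"
    spec:
      restartPolicy: Never                 # required, per the note above
      containers:
        - name: app                        # hypothetical
          image: registry.example/app:1.0  # hypothetical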
	I1216 04:29:40.123187  475694 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1216 04:29:40.123204  475694 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1216 04:29:40.123225  475694 command_runner.go:130] > runtime_type = ""
	I1216 04:29:40.123264  475694 command_runner.go:130] > runtime_root = "/run/crun"
	I1216 04:29:40.123284  475694 command_runner.go:130] > inherit_default_runtime = false
	I1216 04:29:40.123302  475694 command_runner.go:130] > runtime_config_path = ""
	I1216 04:29:40.123331  475694 command_runner.go:130] > container_min_memory = ""
	I1216 04:29:40.123357  475694 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 04:29:40.123375  475694 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 04:29:40.123394  475694 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 04:29:40.123413  475694 command_runner.go:130] > allowed_annotations = [
	I1216 04:29:40.123445  475694 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1216 04:29:40.123463  475694 command_runner.go:130] > ]
	I1216 04:29:40.123482  475694 command_runner.go:130] > privileged_without_host_devices = false
	I1216 04:29:40.123501  475694 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1216 04:29:40.123534  475694 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1216 04:29:40.123552  475694 command_runner.go:130] > runtime_type = ""
	I1216 04:29:40.123570  475694 command_runner.go:130] > runtime_root = "/run/runc"
	I1216 04:29:40.123589  475694 command_runner.go:130] > inherit_default_runtime = false
	I1216 04:29:40.123625  475694 command_runner.go:130] > runtime_config_path = ""
	I1216 04:29:40.123644  475694 command_runner.go:130] > container_min_memory = ""
	I1216 04:29:40.123670  475694 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 04:29:40.123707  475694 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 04:29:40.123742  475694 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 04:29:40.123785  475694 command_runner.go:130] > privileged_without_host_devices = false
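The crun and runc entries above follow the [crio.runtime.runtimes.runtime-handler] format documented earlier; a drop-in registering an additional handler might look like this sketch (handler name and paths are hypothetical, the keys are the documented ones):

    [crio.runtime.runtimes.my-runtime]            # hypothetical handler name
    runtime_path = "/usr/local/bin/my-runtime"    # hypothetical path
    runtime_type = "oci"
    runtime_root = "/run/my-runtime"
    monitor_path = "/usr/libexec/crio/conmon"
    monitor_cgroup = "pod"
    allowed_annotations = [
        "io.kubernetes.cri-o.Devices",            # one of the recognized values listed above
    ]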
	I1216 04:29:40.123815  475694 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1216 04:29:40.123837  475694 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1216 04:29:40.123859  475694 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1216 04:29:40.123892  475694 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1216 04:29:40.123918  475694 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1216 04:29:40.123943  475694 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1216 04:29:40.123978  475694 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1216 04:29:40.123998  475694 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1216 04:29:40.124022  475694 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1216 04:29:40.124054  475694 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1216 04:29:40.124075  475694 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1216 04:29:40.124108  475694 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1216 04:29:40.124142  475694 command_runner.go:130] > # Example:
	I1216 04:29:40.124163  475694 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1216 04:29:40.124183  475694 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1216 04:29:40.124217  475694 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1216 04:29:40.124245  475694 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1216 04:29:40.124262  475694 command_runner.go:130] > # cpuset = "0-1"
	I1216 04:29:40.124279  475694 command_runner.go:130] > # cpushares = "5"
	I1216 04:29:40.124296  475694 command_runner.go:130] > # cpuquota = "1000"
	I1216 04:29:40.124329  475694 command_runner.go:130] > # cpuperiod = "100000"
	I1216 04:29:40.124347  475694 command_runner.go:130] > # cpulimit = "35"
	I1216 04:29:40.124367  475694 command_runner.go:130] > # Where:
	I1216 04:29:40.124385  475694 command_runner.go:130] > # The workload name is workload-type.
	I1216 04:29:40.124421  475694 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1216 04:29:40.124440  475694 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1216 04:29:40.124460  475694 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1216 04:29:40.124492  475694 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1216 04:29:40.124517  475694 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
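Matching the workload example above, a pod opting into "workload-type" and overriding cpushares for a container named "app" would carry annotations along these lines (pod name, container name, image and the "200" value are hypothetical; the annotation forms mirror the config comments above):

    apiVersion: v1
    kind: Pod
    metadata:
      name: tuned-pod                      # hypothetical name
      annotations:
        io.crio/workload: ""               # activation annotation; key only, value ignored
        # Per-container override, in the form shown above.
        io.crio.workload-type/app: '{"cpushares": "200"}'
    spec:
      containers:
        - name: app
          image: registry.example/app:1.0  # hypothetical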
	I1216 04:29:40.124536  475694 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1216 04:29:40.124556  475694 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1216 04:29:40.124575  475694 command_runner.go:130] > # Default value is set to true
	I1216 04:29:40.124610  475694 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1216 04:29:40.124630  475694 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1216 04:29:40.124649  475694 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1216 04:29:40.124667  475694 command_runner.go:130] > # Default value is set to 'false'
	I1216 04:29:40.124699  475694 command_runner.go:130] > # disable_hostport_mapping = false
	I1216 04:29:40.124718  475694 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1216 04:29:40.124741  475694 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1216 04:29:40.124768  475694 command_runner.go:130] > # timezone = ""
	I1216 04:29:40.124795  475694 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1216 04:29:40.124810  475694 command_runner.go:130] > #
	I1216 04:29:40.124829  475694 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1216 04:29:40.124850  475694 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1216 04:29:40.124892  475694 command_runner.go:130] > [crio.image]
	I1216 04:29:40.124912  475694 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1216 04:29:40.124930  475694 command_runner.go:130] > # default_transport = "docker://"
	I1216 04:29:40.124959  475694 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1216 04:29:40.125019  475694 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1216 04:29:40.125026  475694 command_runner.go:130] > # global_auth_file = ""
	I1216 04:29:40.125031  475694 command_runner.go:130] > # The image used to instantiate infra containers.
	I1216 04:29:40.125036  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.125041  475694 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.125093  475694 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1216 04:29:40.125106  475694 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1216 04:29:40.125111  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.125121  475694 command_runner.go:130] > # pause_image_auth_file = ""
	I1216 04:29:40.125127  475694 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1216 04:29:40.125133  475694 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1216 04:29:40.125139  475694 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1216 04:29:40.125145  475694 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1216 04:29:40.125160  475694 command_runner.go:130] > # pause_command = "/pause"
	I1216 04:29:40.125167  475694 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1216 04:29:40.125172  475694 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1216 04:29:40.125178  475694 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1216 04:29:40.125184  475694 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1216 04:29:40.125190  475694 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1216 04:29:40.125198  475694 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1216 04:29:40.125209  475694 command_runner.go:130] > # pinned_images = [
	I1216 04:29:40.125213  475694 command_runner.go:130] > # ]
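As an illustration of the three pattern kinds described above (the pause image is the one echoed elsewhere in this log; the other image names are made up):

    [crio.image]
    pinned_images = [
        "registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
        "registry.example/infra/*",       # glob: wildcard only at the end
        "*critical*",                     # keyword: wildcards on both ends
    ]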
	I1216 04:29:40.125219  475694 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1216 04:29:40.125226  475694 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1216 04:29:40.125232  475694 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1216 04:29:40.125238  475694 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1216 04:29:40.125243  475694 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1216 04:29:40.125248  475694 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1216 04:29:40.125253  475694 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1216 04:29:40.125268  475694 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1216 04:29:40.125275  475694 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1216 04:29:40.125281  475694 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1216 04:29:40.125287  475694 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1216 04:29:40.125291  475694 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1216 04:29:40.125298  475694 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1216 04:29:40.125304  475694 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1216 04:29:40.125308  475694 command_runner.go:130] > # changing them here.
	I1216 04:29:40.125313  475694 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1216 04:29:40.125317  475694 command_runner.go:130] > # insecure_registries = [
	I1216 04:29:40.125325  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125331  475694 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1216 04:29:40.125338  475694 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1216 04:29:40.125343  475694 command_runner.go:130] > # image_volumes = "mkdir"
	I1216 04:29:40.125348  475694 command_runner.go:130] > # Temporary directory to use for storing big files
	I1216 04:29:40.125352  475694 command_runner.go:130] > # big_files_temporary_dir = ""
	I1216 04:29:40.125358  475694 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1216 04:29:40.125365  475694 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1216 04:29:40.125369  475694 command_runner.go:130] > # auto_reload_registries = false
	I1216 04:29:40.125375  475694 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1216 04:29:40.125386  475694 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1216 04:29:40.125392  475694 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1216 04:29:40.125396  475694 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1216 04:29:40.125400  475694 command_runner.go:130] > # The mode of short name resolution.
	I1216 04:29:40.125406  475694 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1216 04:29:40.125414  475694 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1216 04:29:40.125419  475694 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1216 04:29:40.125422  475694 command_runner.go:130] > # short_name_mode = "enforcing"
	I1216 04:29:40.125428  475694 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1216 04:29:40.125435  475694 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1216 04:29:40.125439  475694 command_runner.go:130] > # oci_artifact_mount_support = true
	I1216 04:29:40.125445  475694 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1216 04:29:40.125449  475694 command_runner.go:130] > # CNI plugins.
	I1216 04:29:40.125456  475694 command_runner.go:130] > [crio.network]
	I1216 04:29:40.125462  475694 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1216 04:29:40.125467  475694 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1216 04:29:40.125471  475694 command_runner.go:130] > # cni_default_network = ""
	I1216 04:29:40.125476  475694 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1216 04:29:40.125481  475694 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1216 04:29:40.125487  475694 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1216 04:29:40.125498  475694 command_runner.go:130] > # plugin_dirs = [
	I1216 04:29:40.125501  475694 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1216 04:29:40.125504  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125508  475694 command_runner.go:130] > # List of included pod metrics.
	I1216 04:29:40.125512  475694 command_runner.go:130] > # included_pod_metrics = [
	I1216 04:29:40.125515  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125521  475694 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1216 04:29:40.125524  475694 command_runner.go:130] > [crio.metrics]
	I1216 04:29:40.125529  475694 command_runner.go:130] > # Globally enable or disable metrics support.
	I1216 04:29:40.125533  475694 command_runner.go:130] > # enable_metrics = false
	I1216 04:29:40.125537  475694 command_runner.go:130] > # Specify enabled metrics collectors.
	I1216 04:29:40.125542  475694 command_runner.go:130] > # By default, all metrics are enabled.
	I1216 04:29:40.125549  475694 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1216 04:29:40.125557  475694 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1216 04:29:40.125564  475694 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1216 04:29:40.125568  475694 command_runner.go:130] > # metrics_collectors = [
	I1216 04:29:40.125572  475694 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1216 04:29:40.125576  475694 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1216 04:29:40.125580  475694 command_runner.go:130] > # 	"containers_oom_total",
	I1216 04:29:40.125584  475694 command_runner.go:130] > # 	"processes_defunct",
	I1216 04:29:40.125587  475694 command_runner.go:130] > # 	"operations_total",
	I1216 04:29:40.125591  475694 command_runner.go:130] > # 	"operations_latency_seconds",
	I1216 04:29:40.125596  475694 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1216 04:29:40.125600  475694 command_runner.go:130] > # 	"operations_errors_total",
	I1216 04:29:40.125604  475694 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1216 04:29:40.125608  475694 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1216 04:29:40.125615  475694 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1216 04:29:40.125619  475694 command_runner.go:130] > # 	"image_pulls_success_total",
	I1216 04:29:40.125623  475694 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1216 04:29:40.125627  475694 command_runner.go:130] > # 	"containers_oom_count_total",
	I1216 04:29:40.125632  475694 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1216 04:29:40.125636  475694 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1216 04:29:40.125640  475694 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1216 04:29:40.125643  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125649  475694 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1216 04:29:40.125653  475694 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1216 04:29:40.125658  475694 command_runner.go:130] > # The port on which the metrics server will listen.
	I1216 04:29:40.125662  475694 command_runner.go:130] > # metrics_port = 9090
	I1216 04:29:40.125667  475694 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1216 04:29:40.125670  475694 command_runner.go:130] > # metrics_socket = ""
	I1216 04:29:40.125678  475694 command_runner.go:130] > # The certificate for the secure metrics server.
	I1216 04:29:40.125684  475694 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1216 04:29:40.125690  475694 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1216 04:29:40.125694  475694 command_runner.go:130] > # certificate on any modification event.
	I1216 04:29:40.125698  475694 command_runner.go:130] > # metrics_cert = ""
	I1216 04:29:40.125703  475694 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1216 04:29:40.125708  475694 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1216 04:29:40.125711  475694 command_runner.go:130] > # metrics_key = ""
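Taken together, a drop-in enabling the metrics endpoint with a trimmed collector set might look like the following sketch (host, port and collector choice are illustrative; both prefixed and unprefixed collector names are accepted, per the note above):

    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090
    metrics_collectors = [
        "operations_total",        # same collector as "crio_operations_total"
        "containers_oom_total",
    ]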
	I1216 04:29:40.125718  475694 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1216 04:29:40.125721  475694 command_runner.go:130] > [crio.tracing]
	I1216 04:29:40.125726  475694 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1216 04:29:40.125730  475694 command_runner.go:130] > # enable_tracing = false
	I1216 04:29:40.125735  475694 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1216 04:29:40.125740  475694 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1216 04:29:40.125747  475694 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1216 04:29:40.125753  475694 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1216 04:29:40.125757  475694 command_runner.go:130] > # CRI-O NRI configuration.
	I1216 04:29:40.125760  475694 command_runner.go:130] > [crio.nri]
	I1216 04:29:40.125764  475694 command_runner.go:130] > # Globally enable or disable NRI.
	I1216 04:29:40.125772  475694 command_runner.go:130] > # enable_nri = true
	I1216 04:29:40.125776  475694 command_runner.go:130] > # NRI socket to listen on.
	I1216 04:29:40.125781  475694 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1216 04:29:40.125785  475694 command_runner.go:130] > # NRI plugin directory to use.
	I1216 04:29:40.125789  475694 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1216 04:29:40.125794  475694 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1216 04:29:40.125799  475694 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1216 04:29:40.125804  475694 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1216 04:29:40.125861  475694 command_runner.go:130] > # nri_disable_connections = false
	I1216 04:29:40.125867  475694 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1216 04:29:40.125871  475694 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1216 04:29:40.125876  475694 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1216 04:29:40.125881  475694 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1216 04:29:40.125885  475694 command_runner.go:130] > # NRI default validator configuration.
	I1216 04:29:40.125892  475694 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1216 04:29:40.125898  475694 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1216 04:29:40.125902  475694 command_runner.go:130] > # can be restricted/rejected:
	I1216 04:29:40.125905  475694 command_runner.go:130] > # - OCI hook injection
	I1216 04:29:40.125910  475694 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1216 04:29:40.125915  475694 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1216 04:29:40.125919  475694 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1216 04:29:40.125923  475694 command_runner.go:130] > # - adjustment of linux namespaces
	I1216 04:29:40.125929  475694 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1216 04:29:40.125936  475694 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1216 04:29:40.125941  475694 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1216 04:29:40.125944  475694 command_runner.go:130] > #
	I1216 04:29:40.125948  475694 command_runner.go:130] > # [crio.nri.default_validator]
	I1216 04:29:40.125953  475694 command_runner.go:130] > # nri_enable_default_validator = false
	I1216 04:29:40.125958  475694 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1216 04:29:40.125963  475694 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1216 04:29:40.125969  475694 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1216 04:29:40.125974  475694 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1216 04:29:40.125979  475694 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1216 04:29:40.125986  475694 command_runner.go:130] > # nri_validator_required_plugins = [
	I1216 04:29:40.125991  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125996  475694 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
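A sketch of enabling the default validator with two of the restrictable adjustments listed above rejected (which adjustments to reject is illustrative; the key names are the ones documented above):

    [crio.nri]
    enable_nri = true

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true    # reject OCI hook injection
    nri_validator_reject_namespace_adjustment = true   # reject Linux namespace adjustments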
	I1216 04:29:40.126002  475694 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1216 04:29:40.126007  475694 command_runner.go:130] > [crio.stats]
	I1216 04:29:40.126013  475694 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1216 04:29:40.126018  475694 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1216 04:29:40.126022  475694 command_runner.go:130] > # stats_collection_period = 0
	I1216 04:29:40.126028  475694 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1216 04:29:40.126034  475694 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1216 04:29:40.126038  475694 command_runner.go:130] > # collection_period = 0
	I1216 04:29:40.126084  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086834829Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1216 04:29:40.126093  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086875912Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1216 04:29:40.126103  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086913837Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1216 04:29:40.126111  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086943031Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1216 04:29:40.126123  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.087027733Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:40.126132  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.087362399Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1216 04:29:40.126142  475694 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1216 04:29:40.126226  475694 cni.go:84] Creating CNI manager for ""
	I1216 04:29:40.126235  475694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:29:40.126255  475694 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:29:40.126277  475694 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-763073 NodeName:functional-763073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:29:40.126422  475694 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-763073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:29:40.126497  475694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:29:40.134815  475694 command_runner.go:130] > kubeadm
	I1216 04:29:40.134839  475694 command_runner.go:130] > kubectl
	I1216 04:29:40.134844  475694 command_runner.go:130] > kubelet
	I1216 04:29:40.134872  475694 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:29:40.134932  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:29:40.143529  475694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 04:29:40.156375  475694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:29:40.169188  475694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 04:29:40.182223  475694 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:29:40.185968  475694 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 04:29:40.186105  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:40.327743  475694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:29:41.068736  475694 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073 for IP: 192.168.49.2
	I1216 04:29:41.068757  475694 certs.go:195] generating shared ca certs ...
	I1216 04:29:41.068779  475694 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.069050  475694 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:29:41.069145  475694 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:29:41.069172  475694 certs.go:257] generating profile certs ...
	I1216 04:29:41.069366  475694 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key
	I1216 04:29:41.069439  475694 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195
	I1216 04:29:41.069492  475694 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key
	I1216 04:29:41.069508  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 04:29:41.069527  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 04:29:41.069550  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 04:29:41.069568  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 04:29:41.069598  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 04:29:41.069624  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 04:29:41.069636  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 04:29:41.069661  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 04:29:41.069722  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 04:29:41.069792  475694 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 04:29:41.069804  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:29:41.069832  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:29:41.069864  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:29:41.069933  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:29:41.070011  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:29:41.070050  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.070068  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.070082  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem -> /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.070740  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:29:41.088516  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:29:41.106273  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:29:41.124169  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:29:41.142346  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:29:41.160632  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:29:41.181690  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:29:41.199949  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:29:41.217789  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 04:29:41.237601  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:29:41.255073  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 04:29:41.272738  475694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:29:41.286149  475694 ssh_runner.go:195] Run: openssl version
	I1216 04:29:41.292023  475694 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 04:29:41.292477  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.299852  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 04:29:41.307795  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312150  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312182  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312250  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.353168  475694 command_runner.go:130] > 3ec20f2e
	I1216 04:29:41.353674  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:29:41.362516  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.370150  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:29:41.377841  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.381956  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.381986  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.382040  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.422880  475694 command_runner.go:130] > b5213941
	I1216 04:29:41.423347  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:29:41.430980  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.438640  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 04:29:41.446570  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450618  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450691  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450770  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.493534  475694 command_runner.go:130] > 51391683
	I1216 04:29:41.494044  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:29:41.501730  475694 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:29:41.505651  475694 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:29:41.505723  475694 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 04:29:41.505736  475694 command_runner.go:130] > Device: 259,1	Inode: 1313043     Links: 1
	I1216 04:29:41.505744  475694 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:29:41.505751  475694 command_runner.go:130] > Access: 2025-12-16 04:25:32.918538317 +0000
	I1216 04:29:41.505756  475694 command_runner.go:130] > Modify: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505760  475694 command_runner.go:130] > Change: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505765  475694 command_runner.go:130] >  Birth: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505860  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:29:41.547026  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.547554  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:29:41.588926  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.589431  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:29:41.630503  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.630976  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:29:41.679374  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.679872  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:29:41.720872  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.720962  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 04:29:41.763843  475694 command_runner.go:130] > Certificate will not expire
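
Each `-checkend 86400` run above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" is the passing output. The same check in pure Go using crypto/x509 (certificate path copied from the log):

    // checkend.go - Go equivalent of `openssl x509 -checkend 86400`:
    // report whether the certificate survives the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if time.Until(cert.NotAfter) > 24*time.Hour {
            fmt.Println("Certificate will not expire")
        } else {
            fmt.Println("Certificate will expire")
        }
    }
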
	I1216 04:29:41.764306  475694 kubeadm.go:401] StartCluster: {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
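
The StartCluster entry is one log line: the whole cluster configuration struct rendered with Go's %+v verb, which prints {Field:value ...} and leaves empty strings blank. A toy illustration of how such a dump is produced (the struct and field names below are invented for the example, not minikube's real ClusterConfig):

    // structdump.go - how a %+v print yields the {Name:... Memory:...} form above.
    package main

    import "fmt"

    type clusterConfig struct {
        Name             string
        Memory           int
        APIServerPort    int
        ContainerRuntime string
    }

    func main() {
        cfg := clusterConfig{Name: "functional-763073", Memory: 4096, APIServerPort: 8441, ContainerRuntime: "crio"}
        fmt.Printf("StartCluster: %+v\n", cfg)
        // Output: StartCluster: {Name:functional-763073 Memory:4096 APIServerPort:8441 ContainerRuntime:crio}
    }
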
	I1216 04:29:41.764397  475694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:29:41.764473  475694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:29:41.794813  475694 cri.go:89] found id: ""
	I1216 04:29:41.795018  475694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:29:41.802238  475694 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 04:29:41.802260  475694 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 04:29:41.802267  475694 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 04:29:41.803148  475694 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:29:41.803169  475694 kubeadm.go:598] restartPrimaryControlPlane start ...
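
The restart decision above keys off whether kubeadm's state files already exist on the node: the `sudo ls` found /var/lib/kubelet/config.yaml, /var/lib/kubelet/kubeadm-flags.env, and /var/lib/minikube/etcd, so the code attempts a cluster restart instead of a fresh `kubeadm init`. A sketch of that check (file list copied from the log; the decision logic is a paraphrase, not minikube's exact code):

    // restartcheck.go - "found existing configuration files, will attempt cluster restart"
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        paths := []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        }
        existing := 0
        for _, p := range paths {
            if _, err := os.Stat(p); err == nil {
                existing++
            }
        }
        if existing > 0 {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no prior state, running kubeadm init")
        }
    }
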
	I1216 04:29:41.803241  475694 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:29:41.810442  475694 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:29:41.810892  475694 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-763073" does not appear in /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.811005  475694 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-438353/kubeconfig needs updating (will repair): [kubeconfig missing "functional-763073" cluster setting kubeconfig missing "functional-763073" context setting]
	I1216 04:29:41.811272  475694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.811696  475694 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.811844  475694 kapi.go:59] client config for functional-763073: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:29:41.812430  475694 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 04:29:41.812449  475694 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 04:29:41.812455  475694 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 04:29:41.812459  475694 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 04:29:41.812464  475694 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 04:29:41.812504  475694 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
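
The kubeconfig repair above re-adds the missing "functional-763073" cluster and context entries before the client config is built. A sketch of that repair using client-go's clientcmd package (endpoint, profile name, and file paths copied from the log; error handling trimmed, and this is an illustration rather than minikube's kubeconfig.go):

    // kubeconfigrepair.go - add missing cluster/context entries to a kubeconfig.
    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/22158-438353/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        name := "functional-763073"
        if _, ok := cfg.Clusters[name]; !ok { // "kubeconfig missing ... cluster setting"
            c := clientcmdapi.NewCluster()
            c.Server = "https://192.168.49.2:8441"
            c.CertificateAuthority = "/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt"
            cfg.Clusters[name] = c
        }
        if _, ok := cfg.Contexts[name]; !ok { // "kubeconfig missing ... context setting"
            ctx := clientcmdapi.NewContext()
            ctx.Cluster, ctx.AuthInfo = name, name
            cfg.Contexts[name] = ctx
        }
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }
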
	I1216 04:29:41.812753  475694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:29:41.827245  475694 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 04:29:41.827324  475694 kubeadm.go:602] duration metric: took 24.148626ms to restartPrimaryControlPlane
	I1216 04:29:41.827348  475694 kubeadm.go:403] duration metric: took 63.050551ms to StartCluster
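
The "does not require reconfiguration" verdict comes from the `diff -u` run two lines earlier: when the kubeadm.yaml already on the node matches the freshly rendered kubeadm.yaml.new, diff exits 0 and the control plane is left as-is. A sketch of that exit-code check (paths copied from the log):

    // reconfigcheck.go - diff exits 0 when the files match, 1 when they differ.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        err := cmd.Run()
        switch {
        case err == nil:
            fmt.Println("no reconfiguration required")
        case cmd.ProcessState != nil && cmd.ProcessState.ExitCode() == 1:
            fmt.Println("kubeadm.yaml changed; control plane needs reconfiguring")
        default:
            fmt.Println("diff failed:", err)
        }
    }
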
	I1216 04:29:41.827392  475694 settings.go:142] acquiring lock: {Name:mk7579526d30444d4a36dd9eeacfd82389e55168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.827497  475694 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.828225  475694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.828522  475694 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:29:41.828868  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:41.828926  475694 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:29:41.829003  475694 addons.go:70] Setting storage-provisioner=true in profile "functional-763073"
	I1216 04:29:41.829025  475694 addons.go:239] Setting addon storage-provisioner=true in "functional-763073"
	I1216 04:29:41.829051  475694 host.go:66] Checking if "functional-763073" exists ...
	I1216 04:29:41.829717  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.829866  475694 addons.go:70] Setting default-storageclass=true in profile "functional-763073"
	I1216 04:29:41.829889  475694 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-763073"
	I1216 04:29:41.830179  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.835425  475694 out.go:179] * Verifying Kubernetes components...
	I1216 04:29:41.843204  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:41.852282  475694 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.852487  475694 kapi.go:59] client config for functional-763073: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:29:41.852847  475694 addons.go:239] Setting addon default-storageclass=true in "functional-763073"
	I1216 04:29:41.852883  475694 host.go:66] Checking if "functional-763073" exists ...
	I1216 04:29:41.853441  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.902066  475694 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:29:41.905129  475694 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:41.905181  475694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:29:41.905276  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:41.908977  475694 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:41.909002  475694 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:29:41.909132  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:41.960105  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:41.975058  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:42.043859  475694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:29:42.092471  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:42.106008  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:42.818195  475694 node_ready.go:35] waiting up to 6m0s for node "functional-763073" to be "Ready" ...
	I1216 04:29:42.818367  475694 type.go:168] "Request Body" body=""
	I1216 04:29:42.818432  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:42.818659  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:42.818682  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818701  475694 retry.go:31] will retry after 327.643243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818740  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:42.818752  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818759  475694 retry.go:31] will retry after 171.339125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818814  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:42.990327  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:43.052462  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.052555  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.052597  475694 retry.go:31] will retry after 320.089446ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
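
Each failed apply above is retried after a randomized, growing delay (171ms, 320ms, 327ms, ... climbing to several seconds later in the log) rather than at a fixed interval, so the retries for the two addon manifests don't line up while the apiserver is still coming back. A generic sketch of that jittered backoff (delays and cap are illustrative, not minikube's exact retry.go schedule):

    // retry.go - jittered exponential backoff, as in "will retry after 327.643243ms".
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base << uint(i)                     // grow the delay each round
            d += time.Duration(rand.Int63n(int64(d))) // add jitter so retries don't align
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        _ = retryWithBackoff(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return fmt.Errorf("connection refused") // apiserver still coming up
            }
            return nil
        })
        fmt.Println("succeeded after", calls, "attempts")
    }
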
	I1216 04:29:43.146742  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.207665  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.212209  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.212243  475694 retry.go:31] will retry after 291.464307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.318395  475694 type.go:168] "Request Body" body=""
	I1216 04:29:43.318472  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:43.318814  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:43.373308  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:43.435189  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.435254  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.435280  475694 retry.go:31] will retry after 781.758867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.504448  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.571334  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.571371  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.571390  475694 retry.go:31] will retry after 332.937553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.818906  475694 type.go:168] "Request Body" body=""
	I1216 04:29:43.818991  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:43.819297  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:43.904706  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.962384  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.966307  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.966396  475694 retry.go:31] will retry after 1.136896719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.217759  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:44.279618  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:44.283381  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.283415  475694 retry.go:31] will retry after 1.1051557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.318552  475694 type.go:168] "Request Body" body=""
	I1216 04:29:44.318673  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:44.319015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:44.818498  475694 type.go:168] "Request Body" body=""
	I1216 04:29:44.818571  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:44.818910  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:44.818988  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
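
The GET /api/v1/nodes/functional-763073 loop above is the 6-minute readiness wait started at 04:29:42; connection-refused responses are expected while the apiserver restarts and are simply retried. A sketch of that poll using client-go (kubeconfig path, node name, and timeout copied from the log; the half-second interval is an assumption):

    // nodeready.go - poll the node until its NodeReady condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22158-438353/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s"
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-763073", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            } // connection refused while the apiserver restarts is expected; keep polling
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for node Ready")
    }
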
	I1216 04:29:45.103534  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:45.194787  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:45.195010  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.195099  475694 retry.go:31] will retry after 1.211699823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.319146  475694 type.go:168] "Request Body" body=""
	I1216 04:29:45.319235  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:45.319562  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:45.388763  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:45.456804  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:45.456849  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.456877  475694 retry.go:31] will retry after 720.865488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.819295  475694 type.go:168] "Request Body" body=""
	I1216 04:29:45.819381  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:45.819670  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:46.178239  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:46.241684  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:46.241730  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.241750  475694 retry.go:31] will retry after 2.398929444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.318930  475694 type.go:168] "Request Body" body=""
	I1216 04:29:46.319008  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:46.319303  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:46.407630  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:46.476894  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:46.476941  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.476959  475694 retry.go:31] will retry after 1.300502308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.818702  475694 type.go:168] "Request Body" body=""
	I1216 04:29:46.818786  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:46.819124  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:46.819187  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:47.318514  475694 type.go:168] "Request Body" body=""
	I1216 04:29:47.318594  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:47.318866  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:47.778651  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:47.819040  475694 type.go:168] "Request Body" body=""
	I1216 04:29:47.819112  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:47.819424  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:47.836852  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:47.840282  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:47.840312  475694 retry.go:31] will retry after 3.994114703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.318482  475694 type.go:168] "Request Body" body=""
	I1216 04:29:48.318555  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:48.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:48.641498  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:48.705855  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:48.705903  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.705923  475694 retry.go:31] will retry after 1.757515206s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.819100  475694 type.go:168] "Request Body" body=""
	I1216 04:29:48.819185  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:48.819457  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:48.819514  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:49.319285  475694 type.go:168] "Request Body" body=""
	I1216 04:29:49.319362  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:49.319697  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:49.819385  475694 type.go:168] "Request Body" body=""
	I1216 04:29:49.819456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:49.819795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:50.318415  475694 type.go:168] "Request Body" body=""
	I1216 04:29:50.318509  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:50.318828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:50.464331  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:50.523255  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:50.523310  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:50.523330  475694 retry.go:31] will retry after 5.029530817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:50.818441  475694 type.go:168] "Request Body" body=""
	I1216 04:29:50.818532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:50.818884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:51.318457  475694 type.go:168] "Request Body" body=""
	I1216 04:29:51.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:51.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:51.318895  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:51.819013  475694 type.go:168] "Request Body" body=""
	I1216 04:29:51.819120  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:51.819434  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:51.834846  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:51.906733  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:51.906789  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:51.906807  475694 retry.go:31] will retry after 4.132534587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:52.319380  475694 type.go:168] "Request Body" body=""
	I1216 04:29:52.319456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:52.319782  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:52.818402  475694 type.go:168] "Request Body" body=""
	I1216 04:29:52.818481  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:52.818820  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:53.318399  475694 type.go:168] "Request Body" body=""
	I1216 04:29:53.318484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:53.318781  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:53.818364  475694 type.go:168] "Request Body" body=""
	I1216 04:29:53.818436  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:53.818718  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:53.818768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:54.318470  475694 type.go:168] "Request Body" body=""
	I1216 04:29:54.318553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:54.318855  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:54.818416  475694 type.go:168] "Request Body" body=""
	I1216 04:29:54.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:54.818791  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:55.318474  475694 type.go:168] "Request Body" body=""
	I1216 04:29:55.318563  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:55.318906  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:55.553265  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:55.626702  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:55.630832  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:55.630867  475694 retry.go:31] will retry after 7.132223529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:55.819263  475694 type.go:168] "Request Body" body=""
	I1216 04:29:55.819349  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:55.819703  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:55.819756  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:56.040181  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:56.104678  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:56.104716  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:56.104735  475694 retry.go:31] will retry after 8.857583825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
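Note why every apply dies the same way: kubectl apply first downloads the server's OpenAPI schema (the /openapi/v2 fetch above) to validate the manifest, so with the apiserver refusing connections on both 192.168.49.2:8441 and [::1]:8441 even a correct YAML fails before anything is submitted; that is why the error suggests --validate=false. A sketch of an alternative guard that dials the endpoint before shelling out; this is an assumption-laden illustration, not what minikube does (minikube simply retries the plain apply, as this log shows):

    package applyguard

    import (
        "fmt"
        "net"
        "os/exec"
        "time"
    )

    // applyWhenReachable dials the apiserver port before running
    // kubectl apply, so the schema download cannot fail against a
    // dead endpoint. Endpoint and timeout are illustrative.
    func applyWhenReachable(manifest string) error {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            return fmt.Errorf("apiserver not reachable: %w", err)
        }
        conn.Close()
        out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply failed: %v: %s", err, out)
        }
        return nil
    }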
	I1216 04:29:56.319036  475694 type.go:168] "Request Body" body=""
	I1216 04:29:56.319119  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:56.319453  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:56.819390  475694 type.go:168] "Request Body" body=""
	I1216 04:29:56.819466  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:56.819757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
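The paired Request/Response lines come from client-go's transport-level debug logging, which wraps the HTTP round tripper; status="" and milliseconds=0 simply mean the TCP dial was refused before any HTTP exchange happened. A minimal version of such a wrapper (an illustrative sketch, not client-go's implementation):

    package rtlog

    import (
        "log"
        "net/http"
        "time"
    )

    // loggingRT wraps an http.RoundTripper and logs each request's
    // method, URL, status and latency, mirroring the round_trippers
    // entries in this log. On connection refused, resp is nil, so the
    // status logs empty and the latency rounds down to 0ms.
    type loggingRT struct{ next http.RoundTripper }

    func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        resp, err := l.next.RoundTrip(req)
        status := ""
        if resp != nil {
            status = resp.Status
        }
        log.Printf("%s %s status=%q ms=%d err=%v",
            req.Method, req.URL, status, time.Since(start).Milliseconds(), err)
        return resp, err
    }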
	I1216 04:29:57.319383  475694 type.go:168] "Request Body" body=""
	I1216 04:29:57.319466  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:57.319823  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:57.818398  475694 type.go:168] "Request Body" body=""
	I1216 04:29:57.818473  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:57.818722  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:58.319396  475694 type.go:168] "Request Body" body=""
	I1216 04:29:58.319513  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:58.319927  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:58.319980  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:58.818648  475694 type.go:168] "Request Body" body=""
	I1216 04:29:58.818727  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:58.819015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:59.318403  475694 type.go:168] "Request Body" body=""
	I1216 04:29:59.318501  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:59.318763  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:59.818481  475694 type.go:168] "Request Body" body=""
	I1216 04:29:59.818568  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:59.818883  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:00.318660  475694 type.go:168] "Request Body" body=""
	I1216 04:30:00.318742  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:00.319069  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:00.818779  475694 type.go:168] "Request Body" body=""
	I1216 04:30:00.818900  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:00.819255  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:00.819314  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:01.318812  475694 type.go:168] "Request Body" body=""
	I1216 04:30:01.318904  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:01.319269  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:01.818988  475694 type.go:168] "Request Body" body=""
	I1216 04:30:01.819066  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:01.819335  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:02.319195  475694 type.go:168] "Request Body" body=""
	I1216 04:30:02.319286  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:02.319671  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:02.763349  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:02.818891  475694 type.go:168] "Request Body" body=""
	I1216 04:30:02.818969  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:02.819274  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:02.830785  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:02.830835  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:02.830855  475694 retry.go:31] will retry after 11.115111011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:03.318424  475694 type.go:168] "Request Body" body=""
	I1216 04:30:03.318492  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:03.318754  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:03.318795  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:03.818481  475694 type.go:168] "Request Body" body=""
	I1216 04:30:03.818567  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:03.818887  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:04.318356  475694 type.go:168] "Request Body" body=""
	I1216 04:30:04.318440  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:04.318791  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:04.819354  475694 type.go:168] "Request Body" body=""
	I1216 04:30:04.819425  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:04.819745  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:04.963132  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:05.030528  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:05.030573  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:05.030594  475694 retry.go:31] will retry after 13.807129774s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:05.319025  475694 type.go:168] "Request Body" body=""
	I1216 04:30:05.319109  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:05.319430  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:05.319487  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:05.819077  475694 type.go:168] "Request Body" body=""
	I1216 04:30:05.819160  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:05.819454  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:06.319216  475694 type.go:168] "Request Body" body=""
	I1216 04:30:06.319298  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:06.319561  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:06.818566  475694 type.go:168] "Request Body" body=""
	I1216 04:30:06.818640  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:06.818960  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:07.319006  475694 type.go:168] "Request Body" body=""
	I1216 04:30:07.319080  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:07.319410  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:07.819153  475694 type.go:168] "Request Body" body=""
	I1216 04:30:07.819235  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:07.819526  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:07.819580  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:08.319363  475694 type.go:168] "Request Body" body=""
	I1216 04:30:08.319439  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:08.319857  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:08.818460  475694 type.go:168] "Request Body" body=""
	I1216 04:30:08.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:08.818880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:09.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:30:09.318512  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:09.318769  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:09.818489  475694 type.go:168] "Request Body" body=""
	I1216 04:30:09.818572  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:09.818873  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:10.318546  475694 type.go:168] "Request Body" body=""
	I1216 04:30:10.318636  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:10.319011  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:10.319072  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:10.818626  475694 type.go:168] "Request Body" body=""
	I1216 04:30:10.818702  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:10.819016  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:11.318440  475694 type.go:168] "Request Body" body=""
	I1216 04:30:11.318518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:11.318808  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:11.818916  475694 type.go:168] "Request Body" body=""
	I1216 04:30:11.818993  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:11.819322  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:12.319122  475694 type.go:168] "Request Body" body=""
	I1216 04:30:12.319197  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:12.319465  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:12.319515  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:12.819218  475694 type.go:168] "Request Body" body=""
	I1216 04:30:12.819289  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:12.819619  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:13.318346  475694 type.go:168] "Request Body" body=""
	I1216 04:30:13.318424  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:13.318745  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:13.818446  475694 type.go:168] "Request Body" body=""
	I1216 04:30:13.818521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:13.818889  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:13.946231  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:14.010550  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:14.014827  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:14.014869  475694 retry.go:31] will retry after 8.112010712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:14.319336  475694 type.go:168] "Request Body" body=""
	I1216 04:30:14.319410  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:14.319731  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:14.319784  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:14.818352  475694 type.go:168] "Request Body" body=""
	I1216 04:30:14.818426  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:14.818781  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:15.319376  475694 type.go:168] "Request Body" body=""
	I1216 04:30:15.319444  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:15.319700  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:15.818487  475694 type.go:168] "Request Body" body=""
	I1216 04:30:15.818563  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:15.818924  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:16.319359  475694 type.go:168] "Request Body" body=""
	I1216 04:30:16.319430  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:16.319765  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:16.319823  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:16.818746  475694 type.go:168] "Request Body" body=""
	I1216 04:30:16.818828  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:16.819089  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:17.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:30:17.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:17.318878  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:17.818576  475694 type.go:168] "Request Body" body=""
	I1216 04:30:17.818652  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:17.818985  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:18.318670  475694 type.go:168] "Request Body" body=""
	I1216 04:30:18.318748  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:18.319008  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:18.818464  475694 type.go:168] "Request Body" body=""
	I1216 04:30:18.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:18.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:18.818893  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:18.838055  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:18.893739  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:18.897596  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:18.897631  475694 retry.go:31] will retry after 11.366080685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:19.319301  475694 type.go:168] "Request Body" body=""
	I1216 04:30:19.319380  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:19.319681  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:19.819376  475694 type.go:168] "Request Body" body=""
	I1216 04:30:19.819458  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:19.819724  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:20.318407  475694 type.go:168] "Request Body" body=""
	I1216 04:30:20.318501  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:20.318840  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:20.818403  475694 type.go:168] "Request Body" body=""
	I1216 04:30:20.818484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:20.818835  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:21.318401  475694 type.go:168] "Request Body" body=""
	I1216 04:30:21.318469  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:21.318728  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:21.318768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:21.818866  475694 type.go:168] "Request Body" body=""
	I1216 04:30:21.818958  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:21.819324  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:22.127748  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:22.189082  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:22.189129  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:22.189148  475694 retry.go:31] will retry after 27.844564007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:22.319363  475694 type.go:168] "Request Body" body=""
	I1216 04:30:22.319433  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:22.319757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:22.818358  475694 type.go:168] "Request Body" body=""
	I1216 04:30:22.818435  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:22.818698  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:23.319415  475694 type.go:168] "Request Body" body=""
	I1216 04:30:23.319492  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:23.319809  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:23.319865  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:23.818531  475694 type.go:168] "Request Body" body=""
	I1216 04:30:23.818610  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:23.818962  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:24.318495  475694 type.go:168] "Request Body" body=""
	I1216 04:30:24.318564  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:24.318816  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:24.818435  475694 type.go:168] "Request Body" body=""
	I1216 04:30:24.818517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:24.818856  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:25.318545  475694 type.go:168] "Request Body" body=""
	I1216 04:30:25.318628  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:25.318920  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:25.818420  475694 type.go:168] "Request Body" body=""
	I1216 04:30:25.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:25.818846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:25.818900  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:26.318452  475694 type.go:168] "Request Body" body=""
	I1216 04:30:26.318530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:26.318905  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:26.818764  475694 type.go:168] "Request Body" body=""
	I1216 04:30:26.818839  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:26.819183  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:27.318950  475694 type.go:168] "Request Body" body=""
	I1216 04:30:27.319026  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:27.319288  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:27.819187  475694 type.go:168] "Request Body" body=""
	I1216 04:30:27.819262  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:27.819610  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:27.819670  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:28.319414  475694 type.go:168] "Request Body" body=""
	I1216 04:30:28.319507  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:28.319802  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:28.818429  475694 type.go:168] "Request Body" body=""
	I1216 04:30:28.818505  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:28.818767  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:29.318476  475694 type.go:168] "Request Body" body=""
	I1216 04:30:29.318551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:29.318919  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:29.818620  475694 type.go:168] "Request Body" body=""
	I1216 04:30:29.818707  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:29.819030  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:30.264789  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:30.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:30:30.318482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:30.318747  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:30.318791  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:30.329449  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:30.329484  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:30.329503  475694 retry.go:31] will retry after 18.349811318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:30.819293  475694 type.go:168] "Request Body" body=""
	I1216 04:30:30.819380  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:30.819741  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:31.318473  475694 type.go:168] "Request Body" body=""
	I1216 04:30:31.318550  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:31.318884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:31.818872  475694 type.go:168] "Request Body" body=""
	I1216 04:30:31.818940  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:31.819221  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:32.319072  475694 type.go:168] "Request Body" body=""
	I1216 04:30:32.319152  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:32.319497  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:32.319550  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:32.819264  475694 type.go:168] "Request Body" body=""
	I1216 04:30:32.819341  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:32.819678  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:33.319325  475694 type.go:168] "Request Body" body=""
	I1216 04:30:33.319391  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:33.319698  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:33.818422  475694 type.go:168] "Request Body" body=""
	I1216 04:30:33.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:33.818854  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:34.318569  475694 type.go:168] "Request Body" body=""
	I1216 04:30:34.318644  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:34.318965  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:34.818658  475694 type.go:168] "Request Body" body=""
	I1216 04:30:34.818733  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:34.819000  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:34.819051  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:35.318384  475694 type.go:168] "Request Body" body=""
	I1216 04:30:35.318462  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:35.318839  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:35.818450  475694 type.go:168] "Request Body" body=""
	I1216 04:30:35.818528  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:35.818876  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:36.318610  475694 type.go:168] "Request Body" body=""
	I1216 04:30:36.318679  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:36.318948  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:36.818786  475694 type.go:168] "Request Body" body=""
	I1216 04:30:36.818871  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:36.819206  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:36.819259  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:37.318997  475694 type.go:168] "Request Body" body=""
	I1216 04:30:37.319078  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:37.319374  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... ~22 near-identical poll cycles elided (04:30:37.8 – 04:30:48.3): the same GET https://192.168.49.2:8441/api/v1/nodes/functional-763073 every ~500ms, each answered with "connect: connection refused"; the node_ready.go:55 warning below recurred roughly every 2.5s throughout ...]
	W1216 04:30:48.318869  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:48.679520  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:48.741510  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:48.741587  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:48.741616  475694 retry.go:31] will retry after 29.090794722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
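
The retry.go:31 line shows minikube's apply-with-backoff pattern: run kubectl apply, and on failure wait a randomized interval before the next attempt. A rough stdlib-only sketch of that pattern (attempt counts and wait ranges are illustrative; minikube's actual backoff policy may differ):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs the given command, sleeping a jittered backoff
// between failed attempts, and returns the last error if all attempts fail.
func applyWithRetry(args []string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(args[0], args[1:]...).Run(); err == nil {
			return nil
		}
		// Jittered wait, loosely matching the 29s/39s intervals in this log.
		wait := time.Duration(10+rand.Intn(30)) * time.Second
		fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	if err := applyWithRetry([]string{"kubectl", "apply", "--force", "-f",
		"/etc/kubernetes/addons/storage-provisioner.yaml"}, 3); err != nil {
		fmt.Println("giving up:", err)
	}
}

The 29.09s wait logged here and the 39.4s wait for storageclass below are consistent with that kind of jittered schedule.
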
	[... 3 more refused poll cycles elided (04:30:48.8 – 04:30:49.8) ...]
	I1216 04:30:50.034416  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:50.096674  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:50.100468  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:50.100502  475694 retry.go:31] will retry after 39.426681546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... ~56 near-identical poll cycles elided (04:30:50.3 – 04:31:17.8): GET https://192.168.49.2:8441/api/v1/nodes/functional-763073 every ~500ms, all refused, with the node_ready.go:55 "will retry" warning repeating every ~2.5s (last at 04:31:16.819) ...]
	I1216 04:31:17.833208  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:31:17.902395  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:17.906323  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:17.906439  475694 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
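
After this second failed apply, the error is surfaced to the user via out.go:285 rather than retried again. The root cause throughout is that nothing is listening on the apiserver port; a quick way to confirm that independently of kubectl is a raw TCP dial (endpoint taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // e.g. "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

A "connection refused" here means the host is reachable but no process holds the port, i.e. the apiserver itself is down, matching every poll and apply failure in this trace.
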
	[... ~23 near-identical poll cycles elided (04:31:18.3 – 04:31:29.3): all refused, node_ready.go:55 warning repeating every ~2.5s ...]
	I1216 04:31:29.528240  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:31:29.598877  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:29.598918  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:29.598995  475694 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
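	Note on the failure above: kubectl's client-side validation fetches the /openapi/v2 schema from the API server, and with nothing listening on port 8441 that download fails, so `kubectl apply` exits 1 before submitting the manifest; the error's own suggested workaround is --validate=false, though minikube instead treats this as retryable ("apply failed, will retry"). A minimal sketch of that shell-out-and-retry pattern, assuming a hypothetical helper name (applyAddon); the command line and manifest path are taken from the log, but this is not minikube's actual addons code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyAddon shells out to kubectl the same way the log above does and
	// retries while the API server is still coming back up. Hypothetical helper.
	func applyAddon(manifest string) error {
		var lastErr error
		for attempt := 0; attempt < 5; attempt++ {
			cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
			time.Sleep(2 * time.Second) // back off while the apiserver restarts
		}
		return lastErr
	}

	func main() {
		if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
			fmt.Println(err)
		}
	}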
	I1216 04:31:29.602136  475694 out.go:179] * Enabled addons: 
	I1216 04:31:29.604114  475694 addons.go:530] duration metric: took 1m47.775177414s for enable addons: enabled=[]
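	For context on the surrounding lines: each "Request Body" / "Request" / "Response" triplet is minikube polling the Node object every ~500ms and checking its Ready condition, retrying on connection refused (the W-level node_ready.go:55 lines). A minimal client-go sketch of that loop, assuming the kubeconfig path and node name from the log; the structure is illustrative, not minikube's actual node_ready.go implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			// GET /api/v1/nodes/functional-763073, as in the logged requests.
			node, err := client.CoreV1().Nodes().Get(context.TODO(),
				"functional-763073", metav1.GetOptions{})
			if err != nil {
				// Mirrors the warning lines: keep retrying while the
				// apiserver on 192.168.49.2:8441 refuses connections.
				fmt.Printf("error getting node (will retry): %v\n", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}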
	[... identical polling cycles elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-763073 repeated every ~500ms from 04:31:29.818 through 04:32:27.818, each returning an empty response, with node_ready.go:55 "dial tcp 192.168.49.2:8441: connect: connection refused (will retry)" warnings logged roughly every 2s ...]
	I1216 04:32:28.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:32:28.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:28.318875  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:28.818407  475694 type.go:168] "Request Body" body=""
	I1216 04:32:28.818477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:28.818822  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:29.318518  475694 type.go:168] "Request Body" body=""
	I1216 04:32:29.318617  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:29.318953  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:29.818649  475694 type.go:168] "Request Body" body=""
	I1216 04:32:29.818733  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:29.819084  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:29.819143  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:30.318804  475694 type.go:168] "Request Body" body=""
	I1216 04:32:30.318881  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:30.319182  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:30.818908  475694 type.go:168] "Request Body" body=""
	I1216 04:32:30.818985  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:30.819365  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:31.319128  475694 type.go:168] "Request Body" body=""
	I1216 04:32:31.319211  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:31.319551  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:31.818625  475694 type.go:168] "Request Body" body=""
	I1216 04:32:31.818715  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:31.819005  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:32.318377  475694 type.go:168] "Request Body" body=""
	I1216 04:32:32.318452  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:32.318779  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:32.318830  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:32.818478  475694 type.go:168] "Request Body" body=""
	I1216 04:32:32.818558  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:32.818890  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:33.318419  475694 type.go:168] "Request Body" body=""
	I1216 04:32:33.318491  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:33.318763  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:33.818404  475694 type.go:168] "Request Body" body=""
	I1216 04:32:33.818487  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:33.818835  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:34.318540  475694 type.go:168] "Request Body" body=""
	I1216 04:32:34.318621  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:34.318936  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:34.318997  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:34.818434  475694 type.go:168] "Request Body" body=""
	I1216 04:32:34.818510  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:34.818779  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:35.318447  475694 type.go:168] "Request Body" body=""
	I1216 04:32:35.318531  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:35.318863  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:35.818451  475694 type.go:168] "Request Body" body=""
	I1216 04:32:35.818530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:35.818878  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:36.318556  475694 type.go:168] "Request Body" body=""
	I1216 04:32:36.318624  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:36.318986  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:36.319033  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:36.818822  475694 type.go:168] "Request Body" body=""
	I1216 04:32:36.818905  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:36.819233  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:37.319068  475694 type.go:168] "Request Body" body=""
	I1216 04:32:37.319154  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:37.319493  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:37.819197  475694 type.go:168] "Request Body" body=""
	I1216 04:32:37.819270  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:37.819602  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:38.319373  475694 type.go:168] "Request Body" body=""
	I1216 04:32:38.319452  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:38.319769  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:38.319827  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:38.818447  475694 type.go:168] "Request Body" body=""
	I1216 04:32:38.818527  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:38.818861  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:39.318461  475694 type.go:168] "Request Body" body=""
	I1216 04:32:39.318551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:39.318937  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:39.818658  475694 type.go:168] "Request Body" body=""
	I1216 04:32:39.818731  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:39.819050  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:40.318767  475694 type.go:168] "Request Body" body=""
	I1216 04:32:40.318846  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:40.319183  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:40.818948  475694 type.go:168] "Request Body" body=""
	I1216 04:32:40.819022  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:40.819278  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:40.819323  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:41.319042  475694 type.go:168] "Request Body" body=""
	I1216 04:32:41.319117  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:41.319435  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:41.818623  475694 type.go:168] "Request Body" body=""
	I1216 04:32:41.818705  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:41.819037  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:42.318429  475694 type.go:168] "Request Body" body=""
	I1216 04:32:42.318502  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:42.318792  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:42.818437  475694 type.go:168] "Request Body" body=""
	I1216 04:32:42.818515  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:42.818838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:43.318459  475694 type.go:168] "Request Body" body=""
	I1216 04:32:43.318541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:43.318887  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:43.318945  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:43.819357  475694 type.go:168] "Request Body" body=""
	I1216 04:32:43.819431  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:43.819742  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:44.318455  475694 type.go:168] "Request Body" body=""
	I1216 04:32:44.318551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:44.318871  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:44.818581  475694 type.go:168] "Request Body" body=""
	I1216 04:32:44.818656  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:44.818990  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:45.318689  475694 type.go:168] "Request Body" body=""
	I1216 04:32:45.318765  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:45.319069  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:45.319110  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:45.818468  475694 type.go:168] "Request Body" body=""
	I1216 04:32:45.818541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:45.818854  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:46.318349  475694 type.go:168] "Request Body" body=""
	I1216 04:32:46.318433  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:46.318756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:46.818690  475694 type.go:168] "Request Body" body=""
	I1216 04:32:46.818773  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:46.819032  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:47.318444  475694 type.go:168] "Request Body" body=""
	I1216 04:32:47.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:47.318860  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:47.818472  475694 type.go:168] "Request Body" body=""
	I1216 04:32:47.818551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:47.818924  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:47.818986  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:48.319386  475694 type.go:168] "Request Body" body=""
	I1216 04:32:48.319456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:48.319715  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:48.818461  475694 type.go:168] "Request Body" body=""
	I1216 04:32:48.818557  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:48.818880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:49.319359  475694 type.go:168] "Request Body" body=""
	I1216 04:32:49.319434  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:49.319757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:49.819351  475694 type.go:168] "Request Body" body=""
	I1216 04:32:49.819434  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:49.819700  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:49.819743  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:50.318399  475694 type.go:168] "Request Body" body=""
	I1216 04:32:50.318483  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:50.318800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:50.818463  475694 type.go:168] "Request Body" body=""
	I1216 04:32:50.818546  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:50.818880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:51.318426  475694 type.go:168] "Request Body" body=""
	I1216 04:32:51.318508  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:51.318785  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:51.818955  475694 type.go:168] "Request Body" body=""
	I1216 04:32:51.819039  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:51.819431  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:52.319209  475694 type.go:168] "Request Body" body=""
	I1216 04:32:52.319287  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:52.319637  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:52.319692  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:52.818373  475694 type.go:168] "Request Body" body=""
	I1216 04:32:52.818449  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:52.818711  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:53.318405  475694 type.go:168] "Request Body" body=""
	I1216 04:32:53.318481  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:53.318829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:53.818362  475694 type.go:168] "Request Body" body=""
	I1216 04:32:53.818453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:53.818780  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:54.319380  475694 type.go:168] "Request Body" body=""
	I1216 04:32:54.319453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:54.319718  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:54.319768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:54.818452  475694 type.go:168] "Request Body" body=""
	I1216 04:32:54.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:54.818896  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:55.318601  475694 type.go:168] "Request Body" body=""
	I1216 04:32:55.318680  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:55.319023  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:55.818723  475694 type.go:168] "Request Body" body=""
	I1216 04:32:55.818804  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:55.819074  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:56.318355  475694 type.go:168] "Request Body" body=""
	I1216 04:32:56.318436  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:56.318777  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:56.818730  475694 type.go:168] "Request Body" body=""
	I1216 04:32:56.818807  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:56.819167  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:56.819227  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:57.318894  475694 type.go:168] "Request Body" body=""
	I1216 04:32:57.318969  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:57.319232  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:57.818968  475694 type.go:168] "Request Body" body=""
	I1216 04:32:57.819042  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:57.819399  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:58.319214  475694 type.go:168] "Request Body" body=""
	I1216 04:32:58.319287  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:58.319634  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:58.819335  475694 type.go:168] "Request Body" body=""
	I1216 04:32:58.819403  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:58.819672  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:58.819714  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:59.318342  475694 type.go:168] "Request Body" body=""
	I1216 04:32:59.318420  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:59.318754  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:59.818474  475694 type.go:168] "Request Body" body=""
	I1216 04:32:59.818558  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:59.818911  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:00.318619  475694 type.go:168] "Request Body" body=""
	I1216 04:33:00.319047  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:00.319356  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:00.819156  475694 type.go:168] "Request Body" body=""
	I1216 04:33:00.819244  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:00.819576  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:01.319425  475694 type.go:168] "Request Body" body=""
	I1216 04:33:01.319520  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:01.319865  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:01.319922  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:01.818853  475694 type.go:168] "Request Body" body=""
	I1216 04:33:01.818926  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:01.819244  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:02.319032  475694 type.go:168] "Request Body" body=""
	I1216 04:33:02.319108  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:02.319434  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:02.819246  475694 type.go:168] "Request Body" body=""
	I1216 04:33:02.819327  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:02.819678  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:03.319320  475694 type.go:168] "Request Body" body=""
	I1216 04:33:03.319398  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:03.319661  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:03.818365  475694 type.go:168] "Request Body" body=""
	I1216 04:33:03.818441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:03.818761  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:03.818823  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:04.318514  475694 type.go:168] "Request Body" body=""
	I1216 04:33:04.318596  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:04.318928  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:04.818433  475694 type.go:168] "Request Body" body=""
	I1216 04:33:04.818526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:04.818807  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:05.318444  475694 type.go:168] "Request Body" body=""
	I1216 04:33:05.318518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:05.318865  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:05.818451  475694 type.go:168] "Request Body" body=""
	I1216 04:33:05.818526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:05.818904  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:05.818960  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:06.318446  475694 type.go:168] "Request Body" body=""
	I1216 04:33:06.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:06.318787  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:06.818785  475694 type.go:168] "Request Body" body=""
	I1216 04:33:06.818857  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:06.819145  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:07.318817  475694 type.go:168] "Request Body" body=""
	I1216 04:33:07.318891  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:07.319210  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:07.818978  475694 type.go:168] "Request Body" body=""
	I1216 04:33:07.819056  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:07.819319  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:07.819368  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:08.319142  475694 type.go:168] "Request Body" body=""
	I1216 04:33:08.319217  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:08.319580  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:08.819296  475694 type.go:168] "Request Body" body=""
	I1216 04:33:08.819380  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:08.819759  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:09.318401  475694 type.go:168] "Request Body" body=""
	I1216 04:33:09.318476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:09.318763  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:09.818441  475694 type.go:168] "Request Body" body=""
	I1216 04:33:09.818517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:09.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:10.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:33:10.318527  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:10.318867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:10.318924  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:10.818402  475694 type.go:168] "Request Body" body=""
	I1216 04:33:10.818479  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:10.818769  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:11.318449  475694 type.go:168] "Request Body" body=""
	I1216 04:33:11.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:11.318839  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:11.819013  475694 type.go:168] "Request Body" body=""
	I1216 04:33:11.819090  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:11.819424  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:12.319142  475694 type.go:168] "Request Body" body=""
	I1216 04:33:12.319221  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:12.319548  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:12.319601  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:12.819365  475694 type.go:168] "Request Body" body=""
	I1216 04:33:12.819440  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:12.819754  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:13.318386  475694 type.go:168] "Request Body" body=""
	I1216 04:33:13.318466  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:13.318798  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:13.819148  475694 type.go:168] "Request Body" body=""
	I1216 04:33:13.819223  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:13.819475  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:14.319233  475694 type.go:168] "Request Body" body=""
	I1216 04:33:14.319312  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:14.319642  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:14.319694  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:14.819321  475694 type.go:168] "Request Body" body=""
	I1216 04:33:14.819398  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:14.819744  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:15.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:33:15.318490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:15.318773  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:15.818455  475694 type.go:168] "Request Body" body=""
	I1216 04:33:15.818538  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:15.818883  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:16.318633  475694 type.go:168] "Request Body" body=""
	I1216 04:33:16.318712  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:16.319023  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:16.818735  475694 type.go:168] "Request Body" body=""
	I1216 04:33:16.818806  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:16.819070  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:16.819115  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:17.318776  475694 type.go:168] "Request Body" body=""
	I1216 04:33:17.318851  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:17.319191  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:17.818960  475694 type.go:168] "Request Body" body=""
	I1216 04:33:17.819042  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:17.819386  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:18.319157  475694 type.go:168] "Request Body" body=""
	I1216 04:33:18.319226  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:18.319503  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:18.819267  475694 type.go:168] "Request Body" body=""
	I1216 04:33:18.819339  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:18.819652  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:18.819699  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 04:33:19 – 04:34:18 elided: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-763073 poll repeats every ~500ms with the same empty response ("Response" status="" headers="" milliseconds=0), and the node_ready.go:55 warning (Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused, will retry) recurs roughly every 2 seconds throughout ...]
	I1216 04:34:19.318462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:19.318541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:19.318861  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:19.318915  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:19.818577  475694 type.go:168] "Request Body" body=""
	I1216 04:34:19.818646  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:19.818927  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:20.318439  475694 type.go:168] "Request Body" body=""
	I1216 04:34:20.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:20.318833  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:20.818433  475694 type.go:168] "Request Body" body=""
	I1216 04:34:20.818521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:20.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:21.319360  475694 type.go:168] "Request Body" body=""
	I1216 04:34:21.319430  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:21.319702  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:21.319743  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:21.818995  475694 type.go:168] "Request Body" body=""
	I1216 04:34:21.819068  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:21.819437  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:22.319208  475694 type.go:168] "Request Body" body=""
	I1216 04:34:22.319287  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:22.319613  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:22.819318  475694 type.go:168] "Request Body" body=""
	I1216 04:34:22.819390  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:22.819643  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:23.318344  475694 type.go:168] "Request Body" body=""
	I1216 04:34:23.318422  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:23.318762  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:23.818462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:23.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:23.818875  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:23.818927  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:24.318334  475694 type.go:168] "Request Body" body=""
	I1216 04:34:24.318402  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:24.318670  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:24.818364  475694 type.go:168] "Request Body" body=""
	I1216 04:34:24.818442  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:24.818790  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:25.318379  475694 type.go:168] "Request Body" body=""
	I1216 04:34:25.318455  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:25.318831  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:25.818514  475694 type.go:168] "Request Body" body=""
	I1216 04:34:25.818579  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:25.818836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:26.318398  475694 type.go:168] "Request Body" body=""
	I1216 04:34:26.318476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:26.318806  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:26.318858  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:26.818668  475694 type.go:168] "Request Body" body=""
	I1216 04:34:26.818748  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:26.819069  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:27.319360  475694 type.go:168] "Request Body" body=""
	I1216 04:34:27.319437  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:27.319709  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:27.818413  475694 type.go:168] "Request Body" body=""
	I1216 04:34:27.818495  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:27.818834  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:28.318554  475694 type.go:168] "Request Body" body=""
	I1216 04:34:28.318636  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:28.318951  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:28.319002  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:28.818426  475694 type.go:168] "Request Body" body=""
	I1216 04:34:28.818493  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:28.818750  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:29.319398  475694 type.go:168] "Request Body" body=""
	I1216 04:34:29.319469  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:29.319795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:29.818453  475694 type.go:168] "Request Body" body=""
	I1216 04:34:29.818532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:29.818867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:30.319342  475694 type.go:168] "Request Body" body=""
	I1216 04:34:30.319416  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:30.319671  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:30.319711  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:30.818394  475694 type.go:168] "Request Body" body=""
	I1216 04:34:30.818480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:30.818849  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:31.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:34:31.318497  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:31.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:31.818933  475694 type.go:168] "Request Body" body=""
	I1216 04:34:31.819001  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:31.819258  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:32.319093  475694 type.go:168] "Request Body" body=""
	I1216 04:34:32.319167  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:32.319503  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:32.819320  475694 type.go:168] "Request Body" body=""
	I1216 04:34:32.819401  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:32.819759  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:32.819825  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:33.318460  475694 type.go:168] "Request Body" body=""
	I1216 04:34:33.318582  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:33.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:33.818458  475694 type.go:168] "Request Body" body=""
	I1216 04:34:33.818536  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:33.818889  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:34.318460  475694 type.go:168] "Request Body" body=""
	I1216 04:34:34.318539  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:34.318890  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:34.818406  475694 type.go:168] "Request Body" body=""
	I1216 04:34:34.818484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:34.818755  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:35.318438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:35.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:35.318826  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:35.318869  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:35.818405  475694 type.go:168] "Request Body" body=""
	I1216 04:34:35.818477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:35.818828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:36.318423  475694 type.go:168] "Request Body" body=""
	I1216 04:34:36.318497  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:36.318761  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:36.818896  475694 type.go:168] "Request Body" body=""
	I1216 04:34:36.818970  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:36.819296  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:37.318456  475694 type.go:168] "Request Body" body=""
	I1216 04:34:37.318532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:37.318915  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:37.318974  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:37.818620  475694 type.go:168] "Request Body" body=""
	I1216 04:34:37.818687  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:37.818946  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:38.318430  475694 type.go:168] "Request Body" body=""
	I1216 04:34:38.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:38.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:38.818581  475694 type.go:168] "Request Body" body=""
	I1216 04:34:38.818653  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:38.818976  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:39.319318  475694 type.go:168] "Request Body" body=""
	I1216 04:34:39.319398  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:39.319717  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:39.319766  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:39.819368  475694 type.go:168] "Request Body" body=""
	I1216 04:34:39.819451  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:39.819802  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:40.319399  475694 type.go:168] "Request Body" body=""
	I1216 04:34:40.319478  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:40.319815  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:40.819382  475694 type.go:168] "Request Body" body=""
	I1216 04:34:40.819458  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:40.819720  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:41.318432  475694 type.go:168] "Request Body" body=""
	I1216 04:34:41.318502  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:41.318828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:41.818913  475694 type.go:168] "Request Body" body=""
	I1216 04:34:41.818984  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:41.819332  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:41.819390  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:42.319148  475694 type.go:168] "Request Body" body=""
	I1216 04:34:42.319222  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:42.319522  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:42.819320  475694 type.go:168] "Request Body" body=""
	I1216 04:34:42.819397  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:42.819739  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:43.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:34:43.318503  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:43.319081  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:43.818686  475694 type.go:168] "Request Body" body=""
	I1216 04:34:43.818751  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:43.819000  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:44.318419  475694 type.go:168] "Request Body" body=""
	I1216 04:34:44.318489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:44.318800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:44.318860  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:44.818438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:44.818518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:44.818902  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:45.319407  475694 type.go:168] "Request Body" body=""
	I1216 04:34:45.319489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:45.319845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:45.818371  475694 type.go:168] "Request Body" body=""
	I1216 04:34:45.818447  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:45.818804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:46.318536  475694 type.go:168] "Request Body" body=""
	I1216 04:34:46.318624  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:46.318974  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:46.319036  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:46.818922  475694 type.go:168] "Request Body" body=""
	I1216 04:34:46.819000  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:46.819277  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:47.319079  475694 type.go:168] "Request Body" body=""
	I1216 04:34:47.319153  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:47.319486  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:47.819266  475694 type.go:168] "Request Body" body=""
	I1216 04:34:47.819341  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:47.819660  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:48.319327  475694 type.go:168] "Request Body" body=""
	I1216 04:34:48.319403  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:48.319723  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:48.319773  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:48.818362  475694 type.go:168] "Request Body" body=""
	I1216 04:34:48.818441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:48.818771  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:49.318493  475694 type.go:168] "Request Body" body=""
	I1216 04:34:49.318566  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:49.318886  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:49.818551  475694 type.go:168] "Request Body" body=""
	I1216 04:34:49.818618  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:49.818873  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:50.318400  475694 type.go:168] "Request Body" body=""
	I1216 04:34:50.318482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:50.318812  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:50.818522  475694 type.go:168] "Request Body" body=""
	I1216 04:34:50.818600  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:50.818928  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:50.818980  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:51.318625  475694 type.go:168] "Request Body" body=""
	I1216 04:34:51.318702  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:51.319079  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:51.819046  475694 type.go:168] "Request Body" body=""
	I1216 04:34:51.819123  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:51.819663  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:52.319344  475694 type.go:168] "Request Body" body=""
	I1216 04:34:52.319417  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:52.319779  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:52.818421  475694 type.go:168] "Request Body" body=""
	I1216 04:34:52.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:52.818829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:53.318447  475694 type.go:168] "Request Body" body=""
	I1216 04:34:53.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:53.318845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:53.318897  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:53.818432  475694 type.go:168] "Request Body" body=""
	I1216 04:34:53.818506  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:53.818834  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:54.319276  475694 type.go:168] "Request Body" body=""
	I1216 04:34:54.319352  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:54.319592  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:54.819372  475694 type.go:168] "Request Body" body=""
	I1216 04:34:54.819451  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:54.819794  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:55.318383  475694 type.go:168] "Request Body" body=""
	I1216 04:34:55.318468  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:55.318798  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:55.818467  475694 type.go:168] "Request Body" body=""
	I1216 04:34:55.818538  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:55.818798  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:55.818839  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:56.318396  475694 type.go:168] "Request Body" body=""
	I1216 04:34:56.318467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:56.318799  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:56.818695  475694 type.go:168] "Request Body" body=""
	I1216 04:34:56.818770  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:56.819054  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:57.318729  475694 type.go:168] "Request Body" body=""
	I1216 04:34:57.318810  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:57.319103  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:57.818438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:57.818512  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:57.818836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:57.818893  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:58.318454  475694 type.go:168] "Request Body" body=""
	I1216 04:34:58.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:58.318867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:58.818427  475694 type.go:168] "Request Body" body=""
	I1216 04:34:58.818499  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:58.818756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:59.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:34:59.318530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:59.318870  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:59.818462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:59.818542  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:59.818859  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:59.818914  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:00.326681  475694 type.go:168] "Request Body" body=""
	I1216 04:35:00.327158  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:00.327589  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:00.818334  475694 type.go:168] "Request Body" body=""
	I1216 04:35:00.818414  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:00.818768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:01.318487  475694 type.go:168] "Request Body" body=""
	I1216 04:35:01.318573  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:01.318953  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:01.818952  475694 type.go:168] "Request Body" body=""
	I1216 04:35:01.819020  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:01.819285  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:01.819326  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:02.319143  475694 type.go:168] "Request Body" body=""
	I1216 04:35:02.319233  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:02.319559  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:02.819407  475694 type.go:168] "Request Body" body=""
	I1216 04:35:02.819477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:02.819810  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:03.318360  475694 type.go:168] "Request Body" body=""
	I1216 04:35:03.318434  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:03.318682  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:03.818469  475694 type.go:168] "Request Body" body=""
	I1216 04:35:03.818556  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:03.818922  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:04.318461  475694 type.go:168] "Request Body" body=""
	I1216 04:35:04.318553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:04.318846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:04.318896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:04.818557  475694 type.go:168] "Request Body" body=""
	I1216 04:35:04.818626  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:04.818950  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:05.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:35:05.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:05.318874  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:05.818589  475694 type.go:168] "Request Body" body=""
	I1216 04:35:05.818665  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:05.819015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:06.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:35:06.318491  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:06.318748  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:06.818795  475694 type.go:168] "Request Body" body=""
	I1216 04:35:06.818876  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:06.819216  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:06.819271  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:07.319075  475694 type.go:168] "Request Body" body=""
	I1216 04:35:07.319158  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:07.319501  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:07.819216  475694 type.go:168] "Request Body" body=""
	I1216 04:35:07.819290  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:07.819547  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:08.319297  475694 type.go:168] "Request Body" body=""
	I1216 04:35:08.319373  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:08.319684  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:08.819382  475694 type.go:168] "Request Body" body=""
	I1216 04:35:08.819455  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:08.819785  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:08.819836  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET request/blank response pair repeats every ~500ms, and the node_ready.go "connection refused" warning recurs roughly every 2.5s, until the wait budget runs out at 04:35:42 ...]
	I1216 04:35:42.319418  475694 type.go:168] "Request Body" body=""
	I1216 04:35:42.319504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:42.319849  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:42.818357  475694 type.go:168] "Request Body" body=""
	I1216 04:35:42.818432  475694 node_ready.go:38] duration metric: took 6m0.000197669s for node "functional-763073" to be "Ready" ...
	I1216 04:35:42.821511  475694 out.go:203] 
	W1216 04:35:42.824400  475694 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 04:35:42.824420  475694 out.go:285] * 
	W1216 04:35:42.826578  475694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:35:42.829442  475694 out.go:203] 
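
The six-minute loop above is minikube's node-readiness wait running out its budget: every GET to https://192.168.49.2:8441 is refused because the apiserver never comes up, so node_ready.go retries until the 6m0s deadline and the start fails with GUEST_START. Below is a minimal client-go sketch of the same wait pattern, assuming a standard kubeconfig; the endpoint, node name, poll interval, and budget are taken from the log, and everything else is illustrative rather than minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named Node until its Ready condition is True,
// retrying through transient errors such as "connection refused" until
// the context deadline expires, the shape of the node_ready.go wait
// seen in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond) // the log shows ~500ms polls
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // the wait budget seen above
	defer cancel()
	if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "functional-763073"); err != nil {
		fmt.Println(err)
	}
}

With nothing listening on 8441, every Get fails, the loop never observes a Ready condition, and the deadline fires at 04:35:42 exactly as logged.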
	
	
	==> CRI-O <==
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.805841333Z" level=info msg="Using the internal default seccomp profile"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.805849087Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.805855709Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.805861575Z" level=info msg="RDT not available in the host system"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.805877075Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.806774454Z" level=info msg="Conmon does support the --sync option"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.806802081Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.806817876Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.807618582Z" level=info msg="Conmon does support the --sync option"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.807656342Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.807829275Z" level=info msg="Updated default CNI network name to "
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.808388764Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.808772563Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.808831681Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.846853056Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.846899998Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.846964187Z" level=info msg="Create NRI interface"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.84713593Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.847160546Z" level=info msg="runtime interface created"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.847179369Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.84718654Z" level=info msg="runtime interface starting up..."
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.847193703Z" level=info msg="starting plugins..."
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.847212165Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:29:39 functional-763073 crio[5388]: time="2025-12-16T04:29:39.847303653Z" level=info msg="No systemd watchdog enabled"
	Dec 16 04:29:39 functional-763073 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:35:47.475637    8776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:47.476369    8776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:47.477978    8776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:47.478314    8776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:47.479674    8776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:35:47 up  3:18,  0 user,  load average: 0.72, 0.37, 0.81
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:35:44 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:45 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 16 04:35:45 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:45 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:45 functional-763073 kubelet[8649]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:45 functional-763073 kubelet[8649]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:45 functional-763073 kubelet[8649]: E1216 04:35:45.387418    8649 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:45 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:45 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:46 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 16 04:35:46 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:46 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:46 functional-763073 kubelet[8669]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:46 functional-763073 kubelet[8669]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:46 functional-763073 kubelet[8669]: E1216 04:35:46.130758    8669 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:46 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:46 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:46 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1143.
	Dec 16 04:35:46 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:46 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:46 functional-763073 kubelet[8690]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:46 functional-763073 kubelet[8690]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:46 functional-763073 kubelet[8690]: E1216 04:35:46.900027    8690 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:46 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:46 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
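
This crash loop (restart counters 1141 through 1143) is the root cause of everything above: the image ships a kubelet configured to refuse cgroup v1 hosts, and this runner's 5.15 Ubuntu kernel mounts the legacy v1 hierarchy, so the kubelet exits during config validation, the apiserver never starts, and every port-8441 request is refused. The v1/v2 distinction the kubelet checks boils down to the filesystem type mounted at /sys/fs/cgroup; here is a small detection sketch using golang.org/x/sys/unix (it mirrors the usual runtime check, not the kubelet's actual source):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// cgroupV2 reports whether /sys/fs/cgroup is the cgroup v2 unified
// hierarchy by comparing the statfs magic number. On the failing host
// this returns false, which is what trips kubelet's validation.
func cgroupV2() (bool, error) {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		return false, err
	}
	return st.Type == unix.CGROUP2_SUPER_MAGIC, nil
}

func main() {
	v2, err := cgroupV2()
	if err != nil {
		panic(err)
	}
	fmt.Println("cgroup v2 unified hierarchy:", v2)
}

The "configured to not run on a host using cgroup v1" wording suggests the failCgroupV1 KubeletConfiguration option is set in this image; on a cgroup v1 host that makes the kubelet exit at startup by design.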
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (356.681657ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.40s)
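
The --format={{.APIServer}} argument used above is a Go text/template evaluated against minikube's status object, which is why the command prints just "Stopped". A minimal sketch with a hypothetical Status type (field names assumed from the status output, not copied from minikube's source):

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for minikube's status struct; only
// the fields suggested by the status output are modeled here.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	// Mirrors the failing run above: the apiserver is reported Stopped.
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
}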

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 kubectl -- --context functional-763073 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 kubectl -- --context functional-763073 get pods: exit status 1 (108.295126ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-763073 kubectl -- --context functional-763073 get pods": exit status 1
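
Every kubectl invocation in these tests dies the same way: nothing is listening on 192.168.49.2:8441 (or on the forwarded localhost:8441), so the TCP handshake is refused before any HTTP exchange happens. A quick sketch that distinguishes "connection refused" (host up, port closed, the case here) from an unreachable host; the address is the one from the log:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// The apiserver endpoint the failing tests are dialing.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("apiserver port is open")
		return
	}
	if errors.Is(err, syscall.ECONNREFUSED) {
		// Host reachable but nothing bound to the port: the apiserver
		// process is down, matching "connect: connection refused" above.
		fmt.Println("connection refused: apiserver not running")
		return
	}
	// Timeouts or "no route to host" would point at networking instead.
	fmt.Println("dial failed:", err)
}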
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
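
The dump above is standard `docker container inspect` output: a JSON array with one object per container, ending in the `NetworkSettings` block that maps container ports to host ports (e.g. 22/tcp -> 127.0.0.1:33148). As a minimal sketch of decoding that shape in Go — assuming only that the docker CLI is on PATH; the struct names just the fields referenced in this report and is illustrative, not minikube's actual code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// portBinding mirrors the entries under NetworkSettings.Ports above.
	type portBinding struct {
		HostIp   string
		HostPort string
	}

	// container holds only the fields this report refers to.
	type container struct {
		Name            string
		RestartCount    int
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}

	func main() {
		// `docker container inspect` always prints a JSON array,
		// even when a single container is named.
		out, err := exec.Command("docker", "container", "inspect", "functional-763073").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var containers []container
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, c := range containers {
			// e.g. "/functional-763073 [{127.0.0.1 33148}]"
			fmt.Println(c.Name, c.NetworkSettings.Ports["22/tcp"])
		}
	}
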
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (305.894645ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
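
As the harness notes, `minikube status` reports cluster state partly through its exit code, so exit status 2 with `Running` on stdout is not necessarily a failure. A hedged sketch of tolerating that case in Go — treating exit code 2 as "may be ok" is an assumption drawn from the log line above, not a documented contract:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "functional-763073", "-n", "functional-763073")
		out, err := cmd.Output() // stdout is returned even on a non-zero exit
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("host: %s", out)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 2:
			// Exit status 2 can still carry a usable answer ("Running" above),
			// so keep the captured output instead of failing hard.
			fmt.Printf("host (status exited 2, may be ok): %s", out)
		default:
			fmt.Println("status failed:", err)
		}
	}
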
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-763073 logs -n 25: (1.062339744s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-861171 image build -t localhost/my-image:functional-861171 testdata/build --alsologtostderr                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format json --alsologtostderr                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format table --alsologtostderr                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls                                                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ delete         │ -p functional-861171                                                                                                                              │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ start          │ -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │                     │
	│ start          │ -p functional-763073 --alsologtostderr -v=8                                                                                                       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │                     │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:latest                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add minikube-local-cache-test:functional-763073                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache delete minikube-local-cache-test:functional-763073                                                                        │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl images                                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │                     │
	│ cache          │ functional-763073 cache reload                                                                                                                    │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ kubectl        │ functional-763073 kubectl -- --context functional-763073 get pods                                                                                 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:29:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:29:36.794313  475694 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:29:36.794434  475694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:29:36.794446  475694 out.go:374] Setting ErrFile to fd 2...
	I1216 04:29:36.794452  475694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:29:36.794700  475694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:29:36.795091  475694 out.go:368] Setting JSON to false
	I1216 04:29:36.795948  475694 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11523,"bootTime":1765847854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:29:36.796022  475694 start.go:143] virtualization:  
	I1216 04:29:36.799564  475694 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:29:36.803377  475694 notify.go:221] Checking for updates...
	I1216 04:29:36.806471  475694 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:29:36.809418  475694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:29:36.812382  475694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:36.815368  475694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:29:36.818384  475694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:29:36.821299  475694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:29:36.824780  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:36.824898  475694 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:29:36.853440  475694 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:29:36.853553  475694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:29:36.911081  475694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:29:36.901976085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:29:36.911198  475694 docker.go:319] overlay module found
	I1216 04:29:36.914378  475694 out.go:179] * Using the docker driver based on existing profile
	I1216 04:29:36.917157  475694 start.go:309] selected driver: docker
	I1216 04:29:36.917180  475694 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:36.917338  475694 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:29:36.917450  475694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:29:36.970986  475694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:29:36.961820507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:29:36.971442  475694 cni.go:84] Creating CNI manager for ""
	I1216 04:29:36.971503  475694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:29:36.971553  475694 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:36.974751  475694 out.go:179] * Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	I1216 04:29:36.977516  475694 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:29:36.980431  475694 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:29:36.983493  475694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:29:36.983530  475694 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:29:36.983585  475694 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:29:36.983595  475694 cache.go:65] Caching tarball of preloaded images
	I1216 04:29:36.983676  475694 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:29:36.983683  475694 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:29:36.983782  475694 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:29:37.009018  475694 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:29:37.009047  475694 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:29:37.009096  475694 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:29:37.009136  475694 start.go:360] acquireMachinesLock for functional-763073: {Name:mk37f96bdb0feffde12ec58bbc71256d58abc2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:29:37.009247  475694 start.go:364] duration metric: took 82.708µs to acquireMachinesLock for "functional-763073"
	I1216 04:29:37.009287  475694 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:29:37.009293  475694 fix.go:54] fixHost starting: 
	I1216 04:29:37.009582  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:37.028726  475694 fix.go:112] recreateIfNeeded on functional-763073: state=Running err=<nil>
	W1216 04:29:37.028764  475694 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:29:37.032201  475694 out.go:252] * Updating the running docker "functional-763073" container ...
	I1216 04:29:37.032251  475694 machine.go:94] provisionDockerMachine start ...
	I1216 04:29:37.032362  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.050328  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.050673  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.050689  475694 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:29:37.192783  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:29:37.192826  475694 ubuntu.go:182] provisioning hostname "functional-763073"
	I1216 04:29:37.192931  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.211313  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.211628  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.211639  475694 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-763073 && echo "functional-763073" | sudo tee /etc/hostname
	I1216 04:29:37.354192  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:29:37.354269  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.376898  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.377254  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.377278  475694 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-763073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:29:37.509279  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:29:37.509306  475694 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:29:37.509326  475694 ubuntu.go:190] setting up certificates
	I1216 04:29:37.509346  475694 provision.go:84] configureAuth start
	I1216 04:29:37.509406  475694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:29:37.527206  475694 provision.go:143] copyHostCerts
	I1216 04:29:37.527264  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:29:37.527308  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 04:29:37.527320  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:29:37.527395  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:29:37.527487  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:29:37.527509  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 04:29:37.527517  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:29:37.527545  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:29:37.527594  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:29:37.527615  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 04:29:37.527622  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:29:37.527648  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:29:37.527699  475694 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.functional-763073 san=[127.0.0.1 192.168.49.2 functional-763073 localhost minikube]
	I1216 04:29:37.800879  475694 provision.go:177] copyRemoteCerts
	I1216 04:29:37.800949  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:29:37.800990  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.823288  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:37.920869  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 04:29:37.920929  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:29:37.938521  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 04:29:37.938583  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:29:37.956377  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 04:29:37.956439  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:29:37.974119  475694 provision.go:87] duration metric: took 464.750518ms to configureAuth
	I1216 04:29:37.974148  475694 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:29:37.974331  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:37.974450  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.991914  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.992233  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.992254  475694 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:29:38.308392  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:29:38.308467  475694 machine.go:97] duration metric: took 1.27620546s to provisionDockerMachine
	I1216 04:29:38.308501  475694 start.go:293] postStartSetup for "functional-763073" (driver="docker")
	I1216 04:29:38.308543  475694 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:29:38.308636  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:29:38.308736  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.327973  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.425975  475694 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:29:38.429465  475694 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 04:29:38.429486  475694 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 04:29:38.429491  475694 command_runner.go:130] > VERSION_ID="12"
	I1216 04:29:38.429495  475694 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 04:29:38.429500  475694 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 04:29:38.429503  475694 command_runner.go:130] > ID=debian
	I1216 04:29:38.429508  475694 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 04:29:38.429575  475694 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 04:29:38.429584  475694 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 04:29:38.429642  475694 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:29:38.429664  475694 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:29:38.429675  475694 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:29:38.429740  475694 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:29:38.429824  475694 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 04:29:38.429840  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> /etc/ssl/certs/4417272.pem
	I1216 04:29:38.429918  475694 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> hosts in /etc/test/nested/copy/441727
	I1216 04:29:38.429926  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> /etc/test/nested/copy/441727/hosts
	I1216 04:29:38.429973  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/441727
	I1216 04:29:38.438164  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:29:38.456472  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts --> /etc/test/nested/copy/441727/hosts (40 bytes)
	I1216 04:29:38.474815  475694 start.go:296] duration metric: took 166.27897ms for postStartSetup
	I1216 04:29:38.474942  475694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:29:38.475008  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.493257  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.586186  475694 command_runner.go:130] > 13%
	I1216 04:29:38.586744  475694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:29:38.591214  475694 command_runner.go:130] > 169G
	I1216 04:29:38.591631  475694 fix.go:56] duration metric: took 1.582334669s for fixHost
	I1216 04:29:38.591655  475694 start.go:83] releasing machines lock for "functional-763073", held for 1.582392532s
	I1216 04:29:38.591756  475694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:29:38.610497  475694 ssh_runner.go:195] Run: cat /version.json
	I1216 04:29:38.610580  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.610804  475694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:29:38.610862  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.644780  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.648235  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.740654  475694 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765575274-22117", "minikube_version": "v1.37.0", "commit": "908107e58d7f489afb59ecef3679cbdc57b624cc"}
	I1216 04:29:38.740792  475694 ssh_runner.go:195] Run: systemctl --version
	I1216 04:29:38.835621  475694 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 04:29:38.838633  475694 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 04:29:38.838716  475694 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 04:29:38.838811  475694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:29:38.876422  475694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 04:29:38.880827  475694 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 04:29:38.881001  475694 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:29:38.881102  475694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:29:38.888966  475694 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:29:38.888992  475694 start.go:496] detecting cgroup driver to use...
	I1216 04:29:38.889023  475694 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:29:38.889116  475694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:29:38.904919  475694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:29:38.918230  475694 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:29:38.918296  475694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:29:38.934386  475694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:29:38.947903  475694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:29:39.064725  475694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:29:39.186461  475694 docker.go:234] disabling docker service ...
	I1216 04:29:39.186555  475694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:29:39.201259  475694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:29:39.214213  475694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:29:39.331697  475694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:29:39.468929  475694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:29:39.481743  475694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:29:39.494008  475694 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1216 04:29:39.494807  475694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:29:39.494889  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.503668  475694 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:29:39.503751  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.513027  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.521738  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.530476  475694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:29:39.538796  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.547730  475694 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.556341  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.565046  475694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:29:39.571643  475694 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 04:29:39.572565  475694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:29:39.579896  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:39.695396  475694 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 04:29:39.852818  475694 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:29:39.852930  475694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:29:39.856967  475694 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1216 04:29:39.856989  475694 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 04:29:39.856996  475694 command_runner.go:130] > Device: 0,72	Inode: 1641        Links: 1
	I1216 04:29:39.857013  475694 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:29:39.857019  475694 command_runner.go:130] > Access: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857028  475694 command_runner.go:130] > Modify: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857036  475694 command_runner.go:130] > Change: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857040  475694 command_runner.go:130] >  Birth: -
	I1216 04:29:39.857332  475694 start.go:564] Will wait 60s for crictl version
	I1216 04:29:39.857393  475694 ssh_runner.go:195] Run: which crictl
	I1216 04:29:39.860635  475694 command_runner.go:130] > /usr/local/bin/crictl
	I1216 04:29:39.860907  475694 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:29:39.883882  475694 command_runner.go:130] > Version:  0.1.0
	I1216 04:29:39.883905  475694 command_runner.go:130] > RuntimeName:  cri-o
	I1216 04:29:39.883910  475694 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1216 04:29:39.883916  475694 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 04:29:39.886266  475694 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:29:39.886355  475694 ssh_runner.go:195] Run: crio --version
	I1216 04:29:39.912976  475694 command_runner.go:130] > crio version 1.34.3
	I1216 04:29:39.913004  475694 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 04:29:39.913011  475694 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 04:29:39.913016  475694 command_runner.go:130] >    GitTreeState:   dirty
	I1216 04:29:39.913021  475694 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 04:29:39.913026  475694 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 04:29:39.913030  475694 command_runner.go:130] >    Compiler:       gc
	I1216 04:29:39.913034  475694 command_runner.go:130] >    Platform:       linux/arm64
	I1216 04:29:39.913044  475694 command_runner.go:130] >    Linkmode:       static
	I1216 04:29:39.913048  475694 command_runner.go:130] >    BuildTags:
	I1216 04:29:39.913052  475694 command_runner.go:130] >      static
	I1216 04:29:39.913055  475694 command_runner.go:130] >      netgo
	I1216 04:29:39.913059  475694 command_runner.go:130] >      osusergo
	I1216 04:29:39.913089  475694 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 04:29:39.913094  475694 command_runner.go:130] >      seccomp
	I1216 04:29:39.913097  475694 command_runner.go:130] >      apparmor
	I1216 04:29:39.913101  475694 command_runner.go:130] >      selinux
	I1216 04:29:39.913104  475694 command_runner.go:130] >    LDFlags:          unknown
	I1216 04:29:39.913108  475694 command_runner.go:130] >    SeccompEnabled:   true
	I1216 04:29:39.913112  475694 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 04:29:39.915574  475694 ssh_runner.go:195] Run: crio --version
	I1216 04:29:39.945490  475694 command_runner.go:130] > crio version 1.34.3
	I1216 04:29:39.945513  475694 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 04:29:39.945520  475694 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 04:29:39.945525  475694 command_runner.go:130] >    GitTreeState:   dirty
	I1216 04:29:39.945530  475694 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 04:29:39.945534  475694 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 04:29:39.945538  475694 command_runner.go:130] >    Compiler:       gc
	I1216 04:29:39.945543  475694 command_runner.go:130] >    Platform:       linux/arm64
	I1216 04:29:39.945548  475694 command_runner.go:130] >    Linkmode:       static
	I1216 04:29:39.945551  475694 command_runner.go:130] >    BuildTags:
	I1216 04:29:39.945557  475694 command_runner.go:130] >      static
	I1216 04:29:39.945561  475694 command_runner.go:130] >      netgo
	I1216 04:29:39.945587  475694 command_runner.go:130] >      osusergo
	I1216 04:29:39.945594  475694 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 04:29:39.945598  475694 command_runner.go:130] >      seccomp
	I1216 04:29:39.945601  475694 command_runner.go:130] >      apparmor
	I1216 04:29:39.945607  475694 command_runner.go:130] >      selinux
	I1216 04:29:39.945617  475694 command_runner.go:130] >    LDFlags:          unknown
	I1216 04:29:39.945623  475694 command_runner.go:130] >    SeccompEnabled:   true
	I1216 04:29:39.945639  475694 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 04:29:39.952832  475694 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 04:29:39.955738  475694 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:29:39.972578  475694 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:29:39.976813  475694 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 04:29:39.976940  475694 kubeadm.go:884] updating cluster {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
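
The single-line cluster definition above is Go's %+v struct formatting: each field prints as name:value on one line, and empty string fields render as a bare "Field:". A minimal sketch reproducing that shape with hypothetical field names (not minikube's actual ClusterConfig type):

package main

import "fmt"

// Hypothetical stand-in for the cluster config; the real field set differs.
type clusterConfig struct {
	Name        string
	KeepContext bool
	Memory      int
	CPUs        int
}

func main() {
	cc := clusterConfig{Name: "functional-763073", Memory: 4096, CPUs: 2}
	// %+v includes field names, producing the single-line form seen in the log.
	fmt.Printf("updating cluster %+v ...\n", cc)
}
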
	I1216 04:29:39.977085  475694 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:29:39.977157  475694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:29:40.026676  475694 command_runner.go:130] > {
	I1216 04:29:40.026700  475694 command_runner.go:130] >   "images":  [
	I1216 04:29:40.026707  475694 command_runner.go:130] >     {
	I1216 04:29:40.026715  475694 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 04:29:40.026721  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026727  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 04:29:40.026731  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026736  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026745  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 04:29:40.026758  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 04:29:40.026762  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026770  475694 command_runner.go:130] >       "size":  "111333938",
	I1216 04:29:40.026775  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.026789  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026796  475694 command_runner.go:130] >     },
	I1216 04:29:40.026800  475694 command_runner.go:130] >     {
	I1216 04:29:40.026807  475694 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 04:29:40.026815  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026820  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 04:29:40.026827  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026831  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026843  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 04:29:40.026852  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 04:29:40.026859  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026863  475694 command_runner.go:130] >       "size":  "29037500",
	I1216 04:29:40.026867  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.026879  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026883  475694 command_runner.go:130] >     },
	I1216 04:29:40.026895  475694 command_runner.go:130] >     {
	I1216 04:29:40.026906  475694 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 04:29:40.026917  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026927  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 04:29:40.026930  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026934  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026942  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 04:29:40.026954  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 04:29:40.026962  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026966  475694 command_runner.go:130] >       "size":  "74491780",
	I1216 04:29:40.026974  475694 command_runner.go:130] >       "username":  "nonroot",
	I1216 04:29:40.026979  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026985  475694 command_runner.go:130] >     },
	I1216 04:29:40.026988  475694 command_runner.go:130] >     {
	I1216 04:29:40.026995  475694 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 04:29:40.027002  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027012  475694 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 04:29:40.027019  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027023  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027031  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 04:29:40.027041  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 04:29:40.027047  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027052  475694 command_runner.go:130] >       "size":  "60857170",
	I1216 04:29:40.027058  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027063  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027070  475694 command_runner.go:130] >       },
	I1216 04:29:40.027084  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027092  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027096  475694 command_runner.go:130] >     },
	I1216 04:29:40.027100  475694 command_runner.go:130] >     {
	I1216 04:29:40.027106  475694 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 04:29:40.027114  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027119  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 04:29:40.027129  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027138  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027146  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 04:29:40.027157  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 04:29:40.027161  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027168  475694 command_runner.go:130] >       "size":  "84949999",
	I1216 04:29:40.027171  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027175  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027183  475694 command_runner.go:130] >       },
	I1216 04:29:40.027187  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027192  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027200  475694 command_runner.go:130] >     },
	I1216 04:29:40.027203  475694 command_runner.go:130] >     {
	I1216 04:29:40.027214  475694 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 04:29:40.027229  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027235  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 04:29:40.027241  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027245  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027254  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 04:29:40.027266  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 04:29:40.027269  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027278  475694 command_runner.go:130] >       "size":  "72170325",
	I1216 04:29:40.027281  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027288  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027292  475694 command_runner.go:130] >       },
	I1216 04:29:40.027300  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027305  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027311  475694 command_runner.go:130] >     },
	I1216 04:29:40.027314  475694 command_runner.go:130] >     {
	I1216 04:29:40.027320  475694 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 04:29:40.027324  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027333  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 04:29:40.027337  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027345  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027357  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 04:29:40.027366  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 04:29:40.027372  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027376  475694 command_runner.go:130] >       "size":  "74106775",
	I1216 04:29:40.027384  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027389  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027395  475694 command_runner.go:130] >     },
	I1216 04:29:40.027399  475694 command_runner.go:130] >     {
	I1216 04:29:40.027405  475694 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 04:29:40.027409  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027423  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 04:29:40.027430  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027434  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027442  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 04:29:40.027466  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 04:29:40.027473  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027478  475694 command_runner.go:130] >       "size":  "49822549",
	I1216 04:29:40.027485  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027489  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027492  475694 command_runner.go:130] >       },
	I1216 04:29:40.027498  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027507  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027514  475694 command_runner.go:130] >     },
	I1216 04:29:40.027517  475694 command_runner.go:130] >     {
	I1216 04:29:40.027524  475694 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 04:29:40.027531  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027536  475694 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.027542  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027547  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027557  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 04:29:40.027568  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 04:29:40.027573  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027586  475694 command_runner.go:130] >       "size":  "519884",
	I1216 04:29:40.027593  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027598  475694 command_runner.go:130] >         "value":  "65535"
	I1216 04:29:40.027601  475694 command_runner.go:130] >       },
	I1216 04:29:40.027610  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027614  475694 command_runner.go:130] >       "pinned":  true
	I1216 04:29:40.027620  475694 command_runner.go:130] >     }
	I1216 04:29:40.027623  475694 command_runner.go:130] >   ]
	I1216 04:29:40.027626  475694 command_runner.go:130] > }
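
The crio.go:514 verdict that follows is derived from this payload. A minimal sketch of decoding the `crictl images --output json` shape shown above, assuming only the fields visible in the log (type names are illustrative, not minikube's):

package main

import (
	"encoding/json"
	"fmt"
)

// Shapes inferred from the crictl output above; names are illustrative.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Sample payload in the same shape as the log output.
	raw := []byte(`{"images": [{"id": "d7b100", "repoTags": ["registry.k8s.io/pause:3.10.1"], "size": "519884", "pinned": true}]}`)
	var list crictlImageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size, img.Pinned)
	}
}

The `pinned` flag marks images such as `pause` that are exempt from image garbage collection, which is why it is the only entry reported as true.
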
	I1216 04:29:40.029894  475694 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:29:40.029927  475694 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:29:40.029987  475694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:29:40.055653  475694 command_runner.go:130] > {
	I1216 04:29:40.055673  475694 command_runner.go:130] >   "images":  [
	I1216 04:29:40.055678  475694 command_runner.go:130] >     {
	I1216 04:29:40.055687  475694 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 04:29:40.055692  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055697  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 04:29:40.055701  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055705  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055715  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 04:29:40.055724  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 04:29:40.055728  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055732  475694 command_runner.go:130] >       "size":  "111333938",
	I1216 04:29:40.055736  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055740  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055744  475694 command_runner.go:130] >     },
	I1216 04:29:40.055747  475694 command_runner.go:130] >     {
	I1216 04:29:40.055753  475694 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 04:29:40.055757  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055762  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 04:29:40.055765  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055769  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055787  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 04:29:40.055795  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 04:29:40.055798  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055802  475694 command_runner.go:130] >       "size":  "29037500",
	I1216 04:29:40.055806  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055817  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055820  475694 command_runner.go:130] >     },
	I1216 04:29:40.055824  475694 command_runner.go:130] >     {
	I1216 04:29:40.055830  475694 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 04:29:40.055833  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055838  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 04:29:40.055841  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055845  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055854  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 04:29:40.055862  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 04:29:40.055865  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055869  475694 command_runner.go:130] >       "size":  "74491780",
	I1216 04:29:40.055873  475694 command_runner.go:130] >       "username":  "nonroot",
	I1216 04:29:40.055876  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055879  475694 command_runner.go:130] >     },
	I1216 04:29:40.055882  475694 command_runner.go:130] >     {
	I1216 04:29:40.055891  475694 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 04:29:40.055894  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055899  475694 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 04:29:40.055904  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055908  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055915  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 04:29:40.055923  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 04:29:40.055926  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055929  475694 command_runner.go:130] >       "size":  "60857170",
	I1216 04:29:40.055933  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.055937  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.055940  475694 command_runner.go:130] >       },
	I1216 04:29:40.055952  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055956  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055959  475694 command_runner.go:130] >     },
	I1216 04:29:40.055961  475694 command_runner.go:130] >     {
	I1216 04:29:40.055968  475694 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 04:29:40.055971  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055976  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 04:29:40.055979  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055983  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055990  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 04:29:40.055998  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 04:29:40.056001  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056005  475694 command_runner.go:130] >       "size":  "84949999",
	I1216 04:29:40.056008  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056012  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056015  475694 command_runner.go:130] >       },
	I1216 04:29:40.056018  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056022  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056024  475694 command_runner.go:130] >     },
	I1216 04:29:40.056027  475694 command_runner.go:130] >     {
	I1216 04:29:40.056033  475694 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 04:29:40.056037  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056043  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 04:29:40.056045  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056049  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056057  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 04:29:40.056065  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 04:29:40.056068  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056072  475694 command_runner.go:130] >       "size":  "72170325",
	I1216 04:29:40.056075  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056079  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056082  475694 command_runner.go:130] >       },
	I1216 04:29:40.056085  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056092  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056096  475694 command_runner.go:130] >     },
	I1216 04:29:40.056099  475694 command_runner.go:130] >     {
	I1216 04:29:40.056106  475694 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 04:29:40.056110  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056115  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 04:29:40.056118  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056122  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056130  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 04:29:40.056137  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 04:29:40.056141  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056144  475694 command_runner.go:130] >       "size":  "74106775",
	I1216 04:29:40.056148  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056152  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056155  475694 command_runner.go:130] >     },
	I1216 04:29:40.056158  475694 command_runner.go:130] >     {
	I1216 04:29:40.056164  475694 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 04:29:40.056168  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056173  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 04:29:40.056176  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056180  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056188  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 04:29:40.056204  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 04:29:40.056207  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056211  475694 command_runner.go:130] >       "size":  "49822549",
	I1216 04:29:40.056215  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056218  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056221  475694 command_runner.go:130] >       },
	I1216 04:29:40.056225  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056228  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056231  475694 command_runner.go:130] >     },
	I1216 04:29:40.056233  475694 command_runner.go:130] >     {
	I1216 04:29:40.056240  475694 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 04:29:40.056247  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056251  475694 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.056255  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056259  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056266  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 04:29:40.056278  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 04:29:40.056281  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056285  475694 command_runner.go:130] >       "size":  "519884",
	I1216 04:29:40.056289  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056293  475694 command_runner.go:130] >         "value":  "65535"
	I1216 04:29:40.056296  475694 command_runner.go:130] >       },
	I1216 04:29:40.056299  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056303  475694 command_runner.go:130] >       "pinned":  true
	I1216 04:29:40.056305  475694 command_runner.go:130] >     }
	I1216 04:29:40.056308  475694 command_runner.go:130] >   ]
	I1216 04:29:40.056312  475694 command_runner.go:130] > }
	I1216 04:29:40.057842  475694 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:29:40.057866  475694 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:29:40.057874  475694 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 04:29:40.058028  475694 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-763073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
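
One detail worth noting in the drop-in above: systemd permits multiple ExecStart= values only for Type=oneshot units, so an override must first clear the field with an empty ExecStart= before assigning the new command line, which is exactly what the second ExecStart line does. A minimal sketch of rendering such a drop-in with text/template (the template text is an illustration, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// Illustrative drop-in template; the real kubelet unit carries more flags.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	data := map[string]string{
		"Version": "v1.35.0-beta.0",
		"Node":    "functional-763073",
		"IP":      "192.168.49.2",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
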
	I1216 04:29:40.058117  475694 ssh_runner.go:195] Run: crio config
	I1216 04:29:40.108801  475694 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1216 04:29:40.108825  475694 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1216 04:29:40.108833  475694 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1216 04:29:40.108837  475694 command_runner.go:130] > #
	I1216 04:29:40.108844  475694 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1216 04:29:40.108850  475694 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1216 04:29:40.108857  475694 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1216 04:29:40.108874  475694 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1216 04:29:40.108891  475694 command_runner.go:130] > # reload'.
	I1216 04:29:40.108898  475694 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1216 04:29:40.108905  475694 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1216 04:29:40.108915  475694 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1216 04:29:40.108922  475694 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1216 04:29:40.108925  475694 command_runner.go:130] > [crio]
	I1216 04:29:40.108932  475694 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1216 04:29:40.108939  475694 command_runner.go:130] > # containers images, in this directory.
	I1216 04:29:40.109485  475694 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1216 04:29:40.109505  475694 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1216 04:29:40.110050  475694 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1216 04:29:40.110069  475694 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1216 04:29:40.110418  475694 command_runner.go:130] > # imagestore = ""
	I1216 04:29:40.110434  475694 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1216 04:29:40.110442  475694 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1216 04:29:40.110623  475694 command_runner.go:130] > # storage_driver = "overlay"
	I1216 04:29:40.110671  475694 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1216 04:29:40.110692  475694 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1216 04:29:40.110809  475694 command_runner.go:130] > # storage_option = [
	I1216 04:29:40.110816  475694 command_runner.go:130] > # ]
	I1216 04:29:40.110824  475694 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1216 04:29:40.110831  475694 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1216 04:29:40.110973  475694 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1216 04:29:40.110983  475694 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1216 04:29:40.111015  475694 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1216 04:29:40.111021  475694 command_runner.go:130] > # always happen on a node reboot
	I1216 04:29:40.111194  475694 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1216 04:29:40.111214  475694 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1216 04:29:40.111221  475694 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1216 04:29:40.111260  475694 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1216 04:29:40.111402  475694 command_runner.go:130] > # version_file_persist = ""
	I1216 04:29:40.111414  475694 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1216 04:29:40.111423  475694 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1216 04:29:40.111428  475694 command_runner.go:130] > # internal_wipe = true
	I1216 04:29:40.111436  475694 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1216 04:29:40.111471  475694 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1216 04:29:40.111604  475694 command_runner.go:130] > # internal_repair = true
	I1216 04:29:40.111614  475694 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1216 04:29:40.111621  475694 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1216 04:29:40.111626  475694 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1216 04:29:40.111750  475694 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1216 04:29:40.111761  475694 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1216 04:29:40.111764  475694 command_runner.go:130] > [crio.api]
	I1216 04:29:40.111770  475694 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1216 04:29:40.111973  475694 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1216 04:29:40.111983  475694 command_runner.go:130] > # IP address on which the stream server will listen.
	I1216 04:29:40.112123  475694 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1216 04:29:40.112134  475694 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1216 04:29:40.112139  475694 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1216 04:29:40.112334  475694 command_runner.go:130] > # stream_port = "0"
	I1216 04:29:40.112344  475694 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1216 04:29:40.112496  475694 command_runner.go:130] > # stream_enable_tls = false
	I1216 04:29:40.112506  475694 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1216 04:29:40.112646  475694 command_runner.go:130] > # stream_idle_timeout = ""
	I1216 04:29:40.112658  475694 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1216 04:29:40.112664  475694 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1216 04:29:40.112790  475694 command_runner.go:130] > # stream_tls_cert = ""
	I1216 04:29:40.112800  475694 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1216 04:29:40.112806  475694 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1216 04:29:40.112930  475694 command_runner.go:130] > # stream_tls_key = ""
	I1216 04:29:40.112940  475694 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1216 04:29:40.112947  475694 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1216 04:29:40.112956  475694 command_runner.go:130] > # automatically pick up the changes.
	I1216 04:29:40.113120  475694 command_runner.go:130] > # stream_tls_ca = ""
	I1216 04:29:40.113148  475694 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 04:29:40.113407  475694 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1216 04:29:40.113455  475694 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 04:29:40.113595  475694 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1216 04:29:40.113624  475694 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1216 04:29:40.113657  475694 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1216 04:29:40.113680  475694 command_runner.go:130] > [crio.runtime]
	I1216 04:29:40.113702  475694 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1216 04:29:40.113736  475694 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1216 04:29:40.113757  475694 command_runner.go:130] > # "nofile=1024:2048"
	I1216 04:29:40.113777  475694 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1216 04:29:40.113795  475694 command_runner.go:130] > # default_ulimits = [
	I1216 04:29:40.113822  475694 command_runner.go:130] > # ]
	I1216 04:29:40.113845  475694 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1216 04:29:40.113998  475694 command_runner.go:130] > # no_pivot = false
	I1216 04:29:40.114026  475694 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1216 04:29:40.114058  475694 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1216 04:29:40.114076  475694 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1216 04:29:40.114109  475694 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1216 04:29:40.114138  475694 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1216 04:29:40.114159  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 04:29:40.114189  475694 command_runner.go:130] > # conmon = ""
	I1216 04:29:40.114211  475694 command_runner.go:130] > # Cgroup setting for conmon
	I1216 04:29:40.114233  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1216 04:29:40.114382  475694 command_runner.go:130] > conmon_cgroup = "pod"
	I1216 04:29:40.114414  475694 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1216 04:29:40.114449  475694 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1216 04:29:40.114469  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 04:29:40.114514  475694 command_runner.go:130] > # conmon_env = [
	I1216 04:29:40.114538  475694 command_runner.go:130] > # ]
	I1216 04:29:40.114560  475694 command_runner.go:130] > # Additional environment variables to set for all the
	I1216 04:29:40.114591  475694 command_runner.go:130] > # containers. These are overridden if set in the
	I1216 04:29:40.114614  475694 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1216 04:29:40.114632  475694 command_runner.go:130] > # default_env = [
	I1216 04:29:40.114649  475694 command_runner.go:130] > # ]
	I1216 04:29:40.114679  475694 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1216 04:29:40.114706  475694 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1216 04:29:40.114884  475694 command_runner.go:130] > # selinux = false
	I1216 04:29:40.114896  475694 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1216 04:29:40.114903  475694 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1216 04:29:40.114909  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.114913  475694 command_runner.go:130] > # seccomp_profile = ""
	I1216 04:29:40.114950  475694 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1216 04:29:40.114969  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.114984  475694 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1216 04:29:40.115020  475694 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1216 04:29:40.115046  475694 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1216 04:29:40.115055  475694 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1216 04:29:40.115062  475694 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1216 04:29:40.115067  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.115072  475694 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1216 04:29:40.115077  475694 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1216 04:29:40.115116  475694 command_runner.go:130] > # the cgroup blockio controller.
	I1216 04:29:40.115133  475694 command_runner.go:130] > # blockio_config_file = ""
	I1216 04:29:40.115175  475694 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1216 04:29:40.115196  475694 command_runner.go:130] > # blockio parameters.
	I1216 04:29:40.115214  475694 command_runner.go:130] > # blockio_reload = false
	I1216 04:29:40.115235  475694 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1216 04:29:40.115262  475694 command_runner.go:130] > # irqbalance daemon.
	I1216 04:29:40.115417  475694 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1216 04:29:40.115505  475694 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1216 04:29:40.115615  475694 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1216 04:29:40.115655  475694 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1216 04:29:40.115678  475694 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1216 04:29:40.115698  475694 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1216 04:29:40.115716  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.115745  475694 command_runner.go:130] > # rdt_config_file = ""
	I1216 04:29:40.115769  475694 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1216 04:29:40.115788  475694 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1216 04:29:40.115822  475694 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1216 04:29:40.115844  475694 command_runner.go:130] > # separate_pull_cgroup = ""
	I1216 04:29:40.115864  475694 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1216 04:29:40.115884  475694 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1216 04:29:40.115919  475694 command_runner.go:130] > # will be added.
	I1216 04:29:40.115936  475694 command_runner.go:130] > # default_capabilities = [
	I1216 04:29:40.115952  475694 command_runner.go:130] > # 	"CHOWN",
	I1216 04:29:40.115983  475694 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1216 04:29:40.116006  475694 command_runner.go:130] > # 	"FSETID",
	I1216 04:29:40.116024  475694 command_runner.go:130] > # 	"FOWNER",
	I1216 04:29:40.116040  475694 command_runner.go:130] > # 	"SETGID",
	I1216 04:29:40.116070  475694 command_runner.go:130] > # 	"SETUID",
	I1216 04:29:40.116112  475694 command_runner.go:130] > # 	"SETPCAP",
	I1216 04:29:40.116150  475694 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1216 04:29:40.116170  475694 command_runner.go:130] > # 	"KILL",
	I1216 04:29:40.116187  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116209  475694 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1216 04:29:40.116243  475694 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1216 04:29:40.116264  475694 command_runner.go:130] > # add_inheritable_capabilities = false
	I1216 04:29:40.116284  475694 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1216 04:29:40.116316  475694 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 04:29:40.116336  475694 command_runner.go:130] > default_sysctls = [
	I1216 04:29:40.116352  475694 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1216 04:29:40.116370  475694 command_runner.go:130] > ]
	I1216 04:29:40.116402  475694 command_runner.go:130] > # List of devices on the host that a
	I1216 04:29:40.116430  475694 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1216 04:29:40.116449  475694 command_runner.go:130] > # allowed_devices = [
	I1216 04:29:40.116482  475694 command_runner.go:130] > # 	"/dev/fuse",
	I1216 04:29:40.116502  475694 command_runner.go:130] > # 	"/dev/net/tun",
	I1216 04:29:40.116519  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116549  475694 command_runner.go:130] > # List of additional devices, specified as
	I1216 04:29:40.116842  475694 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1216 04:29:40.116898  475694 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1216 04:29:40.116921  475694 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 04:29:40.116950  475694 command_runner.go:130] > # additional_devices = [
	I1216 04:29:40.116977  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116996  475694 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1216 04:29:40.117028  475694 command_runner.go:130] > # cdi_spec_dirs = [
	I1216 04:29:40.117054  475694 command_runner.go:130] > # 	"/etc/cdi",
	I1216 04:29:40.117101  475694 command_runner.go:130] > # 	"/var/run/cdi",
	I1216 04:29:40.117118  475694 command_runner.go:130] > # ]
	I1216 04:29:40.117139  475694 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1216 04:29:40.117174  475694 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1216 04:29:40.117193  475694 command_runner.go:130] > # Defaults to false.
	I1216 04:29:40.117222  475694 command_runner.go:130] > # device_ownership_from_security_context = false
	I1216 04:29:40.117264  475694 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1216 04:29:40.117284  475694 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1216 04:29:40.117301  475694 command_runner.go:130] > # hooks_dir = [
	I1216 04:29:40.117338  475694 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1216 04:29:40.117357  475694 command_runner.go:130] > # ]
	I1216 04:29:40.117377  475694 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1216 04:29:40.117412  475694 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1216 04:29:40.117421  475694 command_runner.go:130] > # its default mounts from the following two files:
	I1216 04:29:40.117425  475694 command_runner.go:130] > #
	I1216 04:29:40.117432  475694 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1216 04:29:40.117438  475694 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1216 04:29:40.117444  475694 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1216 04:29:40.117447  475694 command_runner.go:130] > #
	I1216 04:29:40.117454  475694 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1216 04:29:40.117461  475694 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1216 04:29:40.117467  475694 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1216 04:29:40.117517  475694 command_runner.go:130] > #      only add mounts it finds in this file.
	I1216 04:29:40.117534  475694 command_runner.go:130] > #
	I1216 04:29:40.117567  475694 command_runner.go:130] > # default_mounts_file = ""
	I1216 04:29:40.117599  475694 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1216 04:29:40.117644  475694 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1216 04:29:40.117670  475694 command_runner.go:130] > # pids_limit = -1
	I1216 04:29:40.117691  475694 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1216 04:29:40.117725  475694 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1216 04:29:40.117753  475694 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1216 04:29:40.117773  475694 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1216 04:29:40.117806  475694 command_runner.go:130] > # log_size_max = -1
	I1216 04:29:40.117830  475694 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1216 04:29:40.117850  475694 command_runner.go:130] > # log_to_journald = false
	I1216 04:29:40.117889  475694 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1216 04:29:40.117908  475694 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1216 04:29:40.117927  475694 command_runner.go:130] > # Path to directory for container attach sockets.
	I1216 04:29:40.117963  475694 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1216 04:29:40.117992  475694 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1216 04:29:40.118011  475694 command_runner.go:130] > # bind_mount_prefix = ""
	I1216 04:29:40.118045  475694 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1216 04:29:40.118064  475694 command_runner.go:130] > # read_only = false
	I1216 04:29:40.118085  475694 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1216 04:29:40.118118  475694 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1216 04:29:40.118145  475694 command_runner.go:130] > # live configuration reload.
	I1216 04:29:40.118163  475694 command_runner.go:130] > # log_level = "info"
	I1216 04:29:40.118200  475694 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1216 04:29:40.118229  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.118246  475694 command_runner.go:130] > # log_filter = ""
	I1216 04:29:40.118284  475694 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1216 04:29:40.118305  475694 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1216 04:29:40.118324  475694 command_runner.go:130] > # separated by comma.
	I1216 04:29:40.118360  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118379  475694 command_runner.go:130] > # uid_mappings = ""
	I1216 04:29:40.118400  475694 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1216 04:29:40.118433  475694 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1216 04:29:40.118453  475694 command_runner.go:130] > # separated by comma.
	I1216 04:29:40.118475  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118516  475694 command_runner.go:130] > # gid_mappings = ""
	I1216 04:29:40.118547  475694 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1216 04:29:40.118581  475694 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 04:29:40.118608  475694 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 04:29:40.118630  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118663  475694 command_runner.go:130] > # minimum_mappable_uid = -1
	I1216 04:29:40.118694  475694 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1216 04:29:40.118716  475694 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 04:29:40.118867  475694 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 04:29:40.119059  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.119080  475694 command_runner.go:130] > # minimum_mappable_gid = -1
	I1216 04:29:40.119119  475694 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1216 04:29:40.119149  475694 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1216 04:29:40.119169  475694 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1216 04:29:40.119206  475694 command_runner.go:130] > # ctr_stop_timeout = 30
	I1216 04:29:40.119228  475694 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1216 04:29:40.119249  475694 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1216 04:29:40.119286  475694 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1216 04:29:40.119304  475694 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1216 04:29:40.119323  475694 command_runner.go:130] > # drop_infra_ctr = true
	I1216 04:29:40.119357  475694 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1216 04:29:40.119378  475694 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1216 04:29:40.119425  475694 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1216 04:29:40.119453  475694 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1216 04:29:40.119476  475694 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1216 04:29:40.119511  475694 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1216 04:29:40.119541  475694 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1216 04:29:40.119560  475694 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1216 04:29:40.119590  475694 command_runner.go:130] > # shared_cpuset = ""
	I1216 04:29:40.119612  475694 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1216 04:29:40.119632  475694 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1216 04:29:40.119663  475694 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1216 04:29:40.119695  475694 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1216 04:29:40.119739  475694 command_runner.go:130] > # pinns_path = ""
	I1216 04:29:40.119766  475694 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1216 04:29:40.119787  475694 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1216 04:29:40.119820  475694 command_runner.go:130] > # enable_criu_support = true
	I1216 04:29:40.119849  475694 command_runner.go:130] > # Enable/disable the generation of the container and
	I1216 04:29:40.119870  475694 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1216 04:29:40.119901  475694 command_runner.go:130] > # enable_pod_events = false
	I1216 04:29:40.119923  475694 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1216 04:29:40.119945  475694 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1216 04:29:40.119977  475694 command_runner.go:130] > # default_runtime = "crun"
	I1216 04:29:40.120005  475694 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1216 04:29:40.120029  475694 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1216 04:29:40.120074  475694 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1216 04:29:40.120094  475694 command_runner.go:130] > # creation as a file is not desired either.
	I1216 04:29:40.120134  475694 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1216 04:29:40.120162  475694 command_runner.go:130] > # the hostname is being managed dynamically.
	I1216 04:29:40.120182  475694 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1216 04:29:40.120216  475694 command_runner.go:130] > # ]
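	For illustration, a minimal sketch of this option using the /etc/hostname case from the comments above (the value is illustrative, not taken from this run's config):
	
	    [crio.runtime]
	    # Reject container creation if /etc/hostname is absent on the host,
	    # instead of letting it be created as a directory.
	    absent_mount_sources_to_reject = [
	        "/etc/hostname",
	    ]
	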
	I1216 04:29:40.120248  475694 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1216 04:29:40.120270  475694 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1216 04:29:40.120320  475694 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1216 04:29:40.120347  475694 command_runner.go:130] > # Each entry in the table should follow the format:
	I1216 04:29:40.120396  475694 command_runner.go:130] > #
	I1216 04:29:40.120416  475694 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1216 04:29:40.120435  475694 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1216 04:29:40.120469  475694 command_runner.go:130] > # runtime_type = "oci"
	I1216 04:29:40.120490  475694 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1216 04:29:40.120514  475694 command_runner.go:130] > # inherit_default_runtime = false
	I1216 04:29:40.120552  475694 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1216 04:29:40.120570  475694 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1216 04:29:40.120589  475694 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1216 04:29:40.120618  475694 command_runner.go:130] > # monitor_env = []
	I1216 04:29:40.120639  475694 command_runner.go:130] > # privileged_without_host_devices = false
	I1216 04:29:40.120667  475694 command_runner.go:130] > # allowed_annotations = []
	I1216 04:29:40.120700  475694 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1216 04:29:40.120720  475694 command_runner.go:130] > # no_sync_log = false
	I1216 04:29:40.120739  475694 command_runner.go:130] > # default_annotations = {}
	I1216 04:29:40.120771  475694 command_runner.go:130] > # stream_websockets = false
	I1216 04:29:40.120795  475694 command_runner.go:130] > # seccomp_profile = ""
	I1216 04:29:40.120859  475694 command_runner.go:130] > # Where:
	I1216 04:29:40.120892  475694 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1216 04:29:40.120926  475694 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1216 04:29:40.120956  475694 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1216 04:29:40.120976  475694 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1216 04:29:40.121008  475694 command_runner.go:130] > #   in $PATH.
	I1216 04:29:40.121038  475694 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1216 04:29:40.121057  475694 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1216 04:29:40.121115  475694 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1216 04:29:40.121133  475694 command_runner.go:130] > #   state.
	I1216 04:29:40.121155  475694 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1216 04:29:40.121189  475694 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1216 04:29:40.121228  475694 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1216 04:29:40.121250  475694 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1216 04:29:40.121270  475694 command_runner.go:130] > #   the values from the default runtime on load time.
	I1216 04:29:40.121300  475694 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1216 04:29:40.121328  475694 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1216 04:29:40.121349  475694 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1216 04:29:40.121370  475694 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1216 04:29:40.121404  475694 command_runner.go:130] > #   The currently recognized values are:
	I1216 04:29:40.121434  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1216 04:29:40.121457  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1216 04:29:40.121484  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1216 04:29:40.121518  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1216 04:29:40.121541  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1216 04:29:40.121564  475694 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1216 04:29:40.121592  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1216 04:29:40.121620  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1216 04:29:40.121640  475694 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1216 04:29:40.121671  475694 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1216 04:29:40.121692  475694 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1216 04:29:40.121712  475694 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1216 04:29:40.121747  475694 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1216 04:29:40.121775  475694 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1216 04:29:40.121796  475694 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1216 04:29:40.121818  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1216 04:29:40.121849  475694 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1216 04:29:40.121873  475694 command_runner.go:130] > #   deprecated option "conmon".
	I1216 04:29:40.121896  475694 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1216 04:29:40.121916  475694 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1216 04:29:40.121945  475694 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1216 04:29:40.121969  475694 command_runner.go:130] > #   should be moved to the container's cgroup
	I1216 04:29:40.121989  475694 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1216 04:29:40.122009  475694 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1216 04:29:40.122039  475694 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1216 04:29:40.122065  475694 command_runner.go:130] > #   conmon-rs by using:
	I1216 04:29:40.122085  475694 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1216 04:29:40.122108  475694 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1216 04:29:40.122138  475694 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1216 04:29:40.122166  475694 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1216 04:29:40.122184  475694 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1216 04:29:40.122204  475694 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1216 04:29:40.122236  475694 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1216 04:29:40.122262  475694 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1216 04:29:40.122285  475694 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1216 04:29:40.122332  475694 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1216 04:29:40.122360  475694 command_runner.go:130] > #   when a machine crash happens.
	I1216 04:29:40.122382  475694 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1216 04:29:40.122406  475694 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1216 04:29:40.122443  475694 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1216 04:29:40.122473  475694 command_runner.go:130] > #   seccomp profile for the runtime.
	I1216 04:29:40.122495  475694 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1216 04:29:40.122537  475694 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1216 04:29:40.122553  475694 command_runner.go:130] > #
	I1216 04:29:40.122572  475694 command_runner.go:130] > # Using the seccomp notifier feature:
	I1216 04:29:40.122589  475694 command_runner.go:130] > #
	I1216 04:29:40.122624  475694 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1216 04:29:40.122646  475694 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1216 04:29:40.122662  475694 command_runner.go:130] > #
	I1216 04:29:40.122693  475694 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1216 04:29:40.122721  475694 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1216 04:29:40.122737  475694 command_runner.go:130] > #
	I1216 04:29:40.122758  475694 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1216 04:29:40.122777  475694 command_runner.go:130] > # feature.
	I1216 04:29:40.122810  475694 command_runner.go:130] > #
	I1216 04:29:40.122842  475694 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1216 04:29:40.122863  475694 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1216 04:29:40.122893  475694 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1216 04:29:40.122913  475694 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1216 04:29:40.122933  475694 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction" is "stop".
	I1216 04:29:40.122960  475694 command_runner.go:130] > #
	I1216 04:29:40.122986  475694 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1216 04:29:40.123006  475694 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1216 04:29:40.123023  475694 command_runner.go:130] > #
	I1216 04:29:40.123043  475694 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1216 04:29:40.123079  475694 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1216 04:29:40.123096  475694 command_runner.go:130] > #
	I1216 04:29:40.123117  475694 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1216 04:29:40.123147  475694 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1216 04:29:40.123171  475694 command_runner.go:130] > # limitation.
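	A minimal sketch of a runtime handler set up for the seccomp notifier described above (the handler name is hypothetical, and the binary path assumes the stock runc shipped with this image; the pod must additionally carry the "io.kubernetes.cri-o.seccompNotifierAction" annotation and use restartPolicy "Never"):
	
	    [crio.runtime.runtimes.runc-notify]
	    # Hypothetical handler reusing the stock runc binary.
	    runtime_path = "/usr/libexec/crio/runc"
	    allowed_annotations = [
	        "io.kubernetes.cri-o.seccompNotifierAction",
	    ]
	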
	I1216 04:29:40.123187  475694 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1216 04:29:40.123204  475694 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1216 04:29:40.123225  475694 command_runner.go:130] > runtime_type = ""
	I1216 04:29:40.123264  475694 command_runner.go:130] > runtime_root = "/run/crun"
	I1216 04:29:40.123284  475694 command_runner.go:130] > inherit_default_runtime = false
	I1216 04:29:40.123302  475694 command_runner.go:130] > runtime_config_path = ""
	I1216 04:29:40.123331  475694 command_runner.go:130] > container_min_memory = ""
	I1216 04:29:40.123357  475694 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 04:29:40.123375  475694 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 04:29:40.123394  475694 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 04:29:40.123413  475694 command_runner.go:130] > allowed_annotations = [
	I1216 04:29:40.123445  475694 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1216 04:29:40.123463  475694 command_runner.go:130] > ]
	I1216 04:29:40.123482  475694 command_runner.go:130] > privileged_without_host_devices = false
	I1216 04:29:40.123501  475694 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1216 04:29:40.123534  475694 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1216 04:29:40.123552  475694 command_runner.go:130] > runtime_type = ""
	I1216 04:29:40.123570  475694 command_runner.go:130] > runtime_root = "/run/runc"
	I1216 04:29:40.123589  475694 command_runner.go:130] > inherit_default_runtime = false
	I1216 04:29:40.123625  475694 command_runner.go:130] > runtime_config_path = ""
	I1216 04:29:40.123644  475694 command_runner.go:130] > container_min_memory = ""
	I1216 04:29:40.123670  475694 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 04:29:40.123707  475694 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 04:29:40.123742  475694 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 04:29:40.123785  475694 command_runner.go:130] > privileged_without_host_devices = false
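	To illustrate the "vm" runtime_type and runtime_config_path fields documented above, a hypothetical Kata-style entry could look like the following (binary and config paths are assumptions, not taken from this host):
	
	    [crio.runtime.runtimes.kata]
	    runtime_path = "/usr/bin/containerd-shim-kata-v2"  # assumed shim location
	    runtime_type = "vm"
	    runtime_root = "/run/vc"
	    # Only honored for the "vm" runtime_type, per the comments above.
	    runtime_config_path = "/etc/kata-containers/configuration.toml"
	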
	I1216 04:29:40.123815  475694 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1216 04:29:40.123837  475694 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1216 04:29:40.123859  475694 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1216 04:29:40.123892  475694 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1216 04:29:40.123918  475694 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1216 04:29:40.123943  475694 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1216 04:29:40.123978  475694 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1216 04:29:40.123998  475694 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1216 04:29:40.124022  475694 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1216 04:29:40.124054  475694 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1216 04:29:40.124075  475694 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1216 04:29:40.124108  475694 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1216 04:29:40.124142  475694 command_runner.go:130] > # Example:
	I1216 04:29:40.124163  475694 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1216 04:29:40.124183  475694 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1216 04:29:40.124217  475694 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1216 04:29:40.124245  475694 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1216 04:29:40.124262  475694 command_runner.go:130] > # cpuset = "0-1"
	I1216 04:29:40.124279  475694 command_runner.go:130] > # cpushares = "5"
	I1216 04:29:40.124296  475694 command_runner.go:130] > # cpuquota = "1000"
	I1216 04:29:40.124329  475694 command_runner.go:130] > # cpuperiod = "100000"
	I1216 04:29:40.124347  475694 command_runner.go:130] > # cpulimit = "35"
	I1216 04:29:40.124367  475694 command_runner.go:130] > # Where:
	I1216 04:29:40.124385  475694 command_runner.go:130] > # The workload name is workload-type.
	I1216 04:29:40.124421  475694 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1216 04:29:40.124440  475694 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1216 04:29:40.124460  475694 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1216 04:29:40.124492  475694 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1216 04:29:40.124517  475694 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1216 04:29:40.124536  475694 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1216 04:29:40.124556  475694 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1216 04:29:40.124575  475694 command_runner.go:130] > # Default value is set to true
	I1216 04:29:40.124610  475694 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1216 04:29:40.124630  475694 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1216 04:29:40.124649  475694 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1216 04:29:40.124667  475694 command_runner.go:130] > # Default value is set to 'false'
	I1216 04:29:40.124699  475694 command_runner.go:130] > # disable_hostport_mapping = false
	I1216 04:29:40.124718  475694 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1216 04:29:40.124741  475694 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1216 04:29:40.124768  475694 command_runner.go:130] > # timezone = ""
	I1216 04:29:40.124795  475694 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1216 04:29:40.124810  475694 command_runner.go:130] > #
	I1216 04:29:40.124829  475694 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1216 04:29:40.124850  475694 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1216 04:29:40.124892  475694 command_runner.go:130] > [crio.image]
	I1216 04:29:40.124912  475694 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1216 04:29:40.124930  475694 command_runner.go:130] > # default_transport = "docker://"
	I1216 04:29:40.124959  475694 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1216 04:29:40.125019  475694 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1216 04:29:40.125026  475694 command_runner.go:130] > # global_auth_file = ""
	I1216 04:29:40.125031  475694 command_runner.go:130] > # The image used to instantiate infra containers.
	I1216 04:29:40.125036  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.125041  475694 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.125093  475694 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1216 04:29:40.125106  475694 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1216 04:29:40.125111  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.125121  475694 command_runner.go:130] > # pause_image_auth_file = ""
	I1216 04:29:40.125127  475694 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1216 04:29:40.125133  475694 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1216 04:29:40.125139  475694 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1216 04:29:40.125145  475694 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1216 04:29:40.125160  475694 command_runner.go:130] > # pause_command = "/pause"
	I1216 04:29:40.125167  475694 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1216 04:29:40.125172  475694 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1216 04:29:40.125178  475694 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1216 04:29:40.125184  475694 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1216 04:29:40.125190  475694 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1216 04:29:40.125198  475694 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1216 04:29:40.125209  475694 command_runner.go:130] > # pinned_images = [
	I1216 04:29:40.125213  475694 command_runner.go:130] > # ]
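	A sketch of the three pattern styles described above (the image names are examples only):
	
	    pinned_images = [
	        "registry.k8s.io/pause:3.10.1",  # exact match: must match the entire name
	        "quay.io/example/agent*",        # glob: wildcard allowed at the end
	        "*critical*",                    # keyword: wildcards on both ends
	    ]
	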
	I1216 04:29:40.125219  475694 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1216 04:29:40.125226  475694 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1216 04:29:40.125232  475694 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1216 04:29:40.125238  475694 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1216 04:29:40.125243  475694 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1216 04:29:40.125248  475694 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1216 04:29:40.125253  475694 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1216 04:29:40.125268  475694 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1216 04:29:40.125275  475694 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1216 04:29:40.125281  475694 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1216 04:29:40.125287  475694 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1216 04:29:40.125291  475694 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1216 04:29:40.125298  475694 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1216 04:29:40.125304  475694 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1216 04:29:40.125308  475694 command_runner.go:130] > # changing them here.
	I1216 04:29:40.125313  475694 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1216 04:29:40.125317  475694 command_runner.go:130] > # insecure_registries = [
	I1216 04:29:40.125325  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125331  475694 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1216 04:29:40.125338  475694 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1216 04:29:40.125343  475694 command_runner.go:130] > # image_volumes = "mkdir"
	I1216 04:29:40.125348  475694 command_runner.go:130] > # Temporary directory to use for storing big files
	I1216 04:29:40.125352  475694 command_runner.go:130] > # big_files_temporary_dir = ""
	I1216 04:29:40.125358  475694 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1216 04:29:40.125365  475694 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1216 04:29:40.125369  475694 command_runner.go:130] > # auto_reload_registries = false
	I1216 04:29:40.125375  475694 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1216 04:29:40.125386  475694 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1216 04:29:40.125392  475694 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1216 04:29:40.125396  475694 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1216 04:29:40.125400  475694 command_runner.go:130] > # The mode of short name resolution.
	I1216 04:29:40.125406  475694 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1216 04:29:40.125414  475694 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1216 04:29:40.125419  475694 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1216 04:29:40.125422  475694 command_runner.go:130] > # short_name_mode = "enforcing"
	I1216 04:29:40.125428  475694 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1216 04:29:40.125435  475694 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1216 04:29:40.125439  475694 command_runner.go:130] > # oci_artifact_mount_support = true
	I1216 04:29:40.125445  475694 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1216 04:29:40.125449  475694 command_runner.go:130] > # CNI plugins.
	I1216 04:29:40.125456  475694 command_runner.go:130] > [crio.network]
	I1216 04:29:40.125462  475694 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1216 04:29:40.125467  475694 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1216 04:29:40.125471  475694 command_runner.go:130] > # cni_default_network = ""
	I1216 04:29:40.125476  475694 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1216 04:29:40.125481  475694 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1216 04:29:40.125487  475694 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1216 04:29:40.125498  475694 command_runner.go:130] > # plugin_dirs = [
	I1216 04:29:40.125501  475694 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1216 04:29:40.125504  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125508  475694 command_runner.go:130] > # List of included pod metrics.
	I1216 04:29:40.125512  475694 command_runner.go:130] > # included_pod_metrics = [
	I1216 04:29:40.125515  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125521  475694 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1216 04:29:40.125524  475694 command_runner.go:130] > [crio.metrics]
	I1216 04:29:40.125529  475694 command_runner.go:130] > # Globally enable or disable metrics support.
	I1216 04:29:40.125533  475694 command_runner.go:130] > # enable_metrics = false
	I1216 04:29:40.125537  475694 command_runner.go:130] > # Specify enabled metrics collectors.
	I1216 04:29:40.125542  475694 command_runner.go:130] > # Per default all metrics are enabled.
	I1216 04:29:40.125549  475694 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1216 04:29:40.125557  475694 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1216 04:29:40.125564  475694 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1216 04:29:40.125568  475694 command_runner.go:130] > # metrics_collectors = [
	I1216 04:29:40.125572  475694 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1216 04:29:40.125576  475694 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1216 04:29:40.125580  475694 command_runner.go:130] > # 	"containers_oom_total",
	I1216 04:29:40.125584  475694 command_runner.go:130] > # 	"processes_defunct",
	I1216 04:29:40.125587  475694 command_runner.go:130] > # 	"operations_total",
	I1216 04:29:40.125591  475694 command_runner.go:130] > # 	"operations_latency_seconds",
	I1216 04:29:40.125596  475694 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1216 04:29:40.125600  475694 command_runner.go:130] > # 	"operations_errors_total",
	I1216 04:29:40.125604  475694 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1216 04:29:40.125608  475694 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1216 04:29:40.125615  475694 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1216 04:29:40.125619  475694 command_runner.go:130] > # 	"image_pulls_success_total",
	I1216 04:29:40.125623  475694 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1216 04:29:40.125627  475694 command_runner.go:130] > # 	"containers_oom_count_total",
	I1216 04:29:40.125632  475694 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1216 04:29:40.125636  475694 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1216 04:29:40.125640  475694 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1216 04:29:40.125643  475694 command_runner.go:130] > # ]
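	As a sketch, enabling the metrics endpoint with only a subset of the collectors listed above might look like this (the selection is illustrative):
	
	    [crio.metrics]
	    enable_metrics = true
	    metrics_collectors = [
	        "operations_total",
	        "image_pulls_failure_total",
	    ]
	    # Host and port match the defaults noted in the comments below.
	    metrics_host = "127.0.0.1"
	    metrics_port = 9090
	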
	I1216 04:29:40.125649  475694 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1216 04:29:40.125653  475694 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1216 04:29:40.125658  475694 command_runner.go:130] > # The port on which the metrics server will listen.
	I1216 04:29:40.125662  475694 command_runner.go:130] > # metrics_port = 9090
	I1216 04:29:40.125667  475694 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1216 04:29:40.125670  475694 command_runner.go:130] > # metrics_socket = ""
	I1216 04:29:40.125678  475694 command_runner.go:130] > # The certificate for the secure metrics server.
	I1216 04:29:40.125684  475694 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1216 04:29:40.125690  475694 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1216 04:29:40.125694  475694 command_runner.go:130] > # certificate on any modification event.
	I1216 04:29:40.125698  475694 command_runner.go:130] > # metrics_cert = ""
	I1216 04:29:40.125703  475694 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1216 04:29:40.125708  475694 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1216 04:29:40.125711  475694 command_runner.go:130] > # metrics_key = ""
	I1216 04:29:40.125718  475694 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1216 04:29:40.125721  475694 command_runner.go:130] > [crio.tracing]
	I1216 04:29:40.125726  475694 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1216 04:29:40.125730  475694 command_runner.go:130] > # enable_tracing = false
	I1216 04:29:40.125735  475694 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1216 04:29:40.125740  475694 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1216 04:29:40.125747  475694 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1216 04:29:40.125753  475694 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1216 04:29:40.125757  475694 command_runner.go:130] > # CRI-O NRI configuration.
	I1216 04:29:40.125760  475694 command_runner.go:130] > [crio.nri]
	I1216 04:29:40.125764  475694 command_runner.go:130] > # Globally enable or disable NRI.
	I1216 04:29:40.125772  475694 command_runner.go:130] > # enable_nri = true
	I1216 04:29:40.125776  475694 command_runner.go:130] > # NRI socket to listen on.
	I1216 04:29:40.125781  475694 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1216 04:29:40.125785  475694 command_runner.go:130] > # NRI plugin directory to use.
	I1216 04:29:40.125789  475694 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1216 04:29:40.125794  475694 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1216 04:29:40.125799  475694 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1216 04:29:40.125804  475694 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1216 04:29:40.125861  475694 command_runner.go:130] > # nri_disable_connections = false
	I1216 04:29:40.125867  475694 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1216 04:29:40.125871  475694 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1216 04:29:40.125876  475694 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1216 04:29:40.125881  475694 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1216 04:29:40.125885  475694 command_runner.go:130] > # NRI default validator configuration.
	I1216 04:29:40.125892  475694 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1216 04:29:40.125898  475694 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1216 04:29:40.125902  475694 command_runner.go:130] > # can be restricted/rejected:
	I1216 04:29:40.125905  475694 command_runner.go:130] > # - OCI hook injection
	I1216 04:29:40.125910  475694 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1216 04:29:40.125915  475694 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1216 04:29:40.125919  475694 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1216 04:29:40.125923  475694 command_runner.go:130] > # - adjustment of linux namespaces
	I1216 04:29:40.125929  475694 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1216 04:29:40.125936  475694 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1216 04:29:40.125941  475694 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1216 04:29:40.125944  475694 command_runner.go:130] > #
	I1216 04:29:40.125948  475694 command_runner.go:130] > # [crio.nri.default_validator]
	I1216 04:29:40.125953  475694 command_runner.go:130] > # nri_enable_default_validator = false
	I1216 04:29:40.125958  475694 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1216 04:29:40.125963  475694 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1216 04:29:40.125969  475694 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1216 04:29:40.125974  475694 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1216 04:29:40.125979  475694 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1216 04:29:40.125986  475694 command_runner.go:130] > # nri_validator_required_plugins = [
	I1216 04:29:40.125991  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125996  475694 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
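	A sketch of the default validator described above, rejecting OCI hook injection and requiring one plugin (the plugin name is hypothetical):
	
	    [crio.nri.default_validator]
	    nri_enable_default_validator = true
	    nri_validator_reject_oci_hook_adjustment = true
	    nri_validator_required_plugins = [
	        "example-resource-plugin",
	    ]
	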
	I1216 04:29:40.126002  475694 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1216 04:29:40.126007  475694 command_runner.go:130] > [crio.stats]
	I1216 04:29:40.126013  475694 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1216 04:29:40.126018  475694 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1216 04:29:40.126022  475694 command_runner.go:130] > # stats_collection_period = 0
	I1216 04:29:40.126028  475694 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1216 04:29:40.126034  475694 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1216 04:29:40.126038  475694 command_runner.go:130] > # collection_period = 0
	I1216 04:29:40.126084  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086834829Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1216 04:29:40.126093  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086875912Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1216 04:29:40.126103  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086913837Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1216 04:29:40.126111  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086943031Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1216 04:29:40.126123  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.087027733Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:40.126132  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.087362399Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1216 04:29:40.126142  475694 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1216 04:29:40.126226  475694 cni.go:84] Creating CNI manager for ""
	I1216 04:29:40.126235  475694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:29:40.126255  475694 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:29:40.126277  475694 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-763073 NodeName:functional-763073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:29:40.126422  475694 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-763073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:29:40.126497  475694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:29:40.134815  475694 command_runner.go:130] > kubeadm
	I1216 04:29:40.134839  475694 command_runner.go:130] > kubectl
	I1216 04:29:40.134844  475694 command_runner.go:130] > kubelet
	I1216 04:29:40.134872  475694 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:29:40.134932  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:29:40.143529  475694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 04:29:40.156375  475694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:29:40.169188  475694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 04:29:40.182223  475694 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:29:40.185968  475694 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 04:29:40.186105  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:40.327743  475694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:29:41.068736  475694 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073 for IP: 192.168.49.2
	I1216 04:29:41.068757  475694 certs.go:195] generating shared ca certs ...
	I1216 04:29:41.068779  475694 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.069050  475694 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:29:41.069145  475694 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:29:41.069172  475694 certs.go:257] generating profile certs ...
	I1216 04:29:41.069366  475694 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key
	I1216 04:29:41.069439  475694 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195
	I1216 04:29:41.069492  475694 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key
	I1216 04:29:41.069508  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 04:29:41.069527  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 04:29:41.069550  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 04:29:41.069568  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 04:29:41.069598  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 04:29:41.069624  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 04:29:41.069636  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 04:29:41.069661  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 04:29:41.069722  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 04:29:41.069792  475694 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 04:29:41.069804  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:29:41.069832  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:29:41.069864  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:29:41.069933  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:29:41.070011  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:29:41.070050  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.070068  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.070082  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem -> /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.070740  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:29:41.088516  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:29:41.106273  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:29:41.124169  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:29:41.142346  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:29:41.160632  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:29:41.181690  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:29:41.199949  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:29:41.217789  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 04:29:41.237601  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:29:41.255073  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 04:29:41.272738  475694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:29:41.286149  475694 ssh_runner.go:195] Run: openssl version
	I1216 04:29:41.292023  475694 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 04:29:41.292477  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.299852  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 04:29:41.307795  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312150  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312182  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312250  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.353168  475694 command_runner.go:130] > 3ec20f2e
	I1216 04:29:41.353674  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:29:41.362516  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.370150  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:29:41.377841  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.381956  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.381986  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.382040  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.422880  475694 command_runner.go:130] > b5213941
	I1216 04:29:41.423347  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:29:41.430980  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.438640  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 04:29:41.446570  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450618  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450691  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450770  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.493534  475694 command_runner.go:130] > 51391683
	I1216 04:29:41.494044  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
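	The hash-and-symlink sequence above is how OpenSSL's trust store works: certificates in /etc/ssl/certs are looked up by subject-hash filenames such as 3ec20f2e.0, so each PEM copied into /usr/share/ca-certificates gets a matching link. A minimal Go sketch of that step, assuming plain local file access (the helper name and error handling are ours, not minikube's):

```go
// Sketch of the hash-and-symlink step shown in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes `openssl x509 -hash -noout` for pemPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it (hypothetical helper, not minikube's API).
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// `ln -fs` semantics: drop any stale link before re-creating it.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```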
	I1216 04:29:41.501730  475694 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:29:41.505651  475694 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:29:41.505723  475694 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 04:29:41.505736  475694 command_runner.go:130] > Device: 259,1	Inode: 1313043     Links: 1
	I1216 04:29:41.505744  475694 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:29:41.505751  475694 command_runner.go:130] > Access: 2025-12-16 04:25:32.918538317 +0000
	I1216 04:29:41.505756  475694 command_runner.go:130] > Modify: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505760  475694 command_runner.go:130] > Change: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505765  475694 command_runner.go:130] >  Birth: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505860  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:29:41.547026  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.547554  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:29:41.588926  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.589431  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:29:41.630503  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.630976  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:29:41.679374  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.679872  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:29:41.720872  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.720962  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 04:29:41.763843  475694 command_runner.go:130] > Certificate will not expire
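	Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 produces the "Certificate will not expire" lines. The same check in native Go, as a sketch (the helper name is ours; paths are two of the ones the log checks):

```go
// Native-Go version of `openssl x509 -checkend 86400`: a cert "expires
// soon" if its NotAfter falls before now + d.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		if soon {
			fmt.Println(p, "expires within 24h")
		} else {
			fmt.Println(p, "will not expire") // mirrors the openssl message above
		}
	}
}
```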
	I1216 04:29:41.764306  475694 kubeadm.go:401] StartCluster: {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:41.764397  475694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:29:41.764473  475694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:29:41.794813  475694 cri.go:89] found id: ""
	I1216 04:29:41.795018  475694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:29:41.802238  475694 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 04:29:41.802260  475694 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 04:29:41.802267  475694 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 04:29:41.803148  475694 kubeadm.go:417] found existing configuration files, will attempt cluster restart
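	The restart-vs-init decision above boils down to probing for kubeadm's leftover files: when all three of the paths the `sudo ls` lists are present, minikube attempts a cluster restart instead of a fresh kubeadm init. An illustrative reduction of that check, as shown in the sketch below (the function name is ours):

```go
// Decide between cluster restart and fresh init by checking for the
// same files the log's `sudo ls` probes.
package main

import (
	"fmt"
	"os"
)

func hasExistingCluster() bool {
	for _, p := range []string{
		"/var/lib/kubelet/config.yaml",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false // any missing piece means no restartable cluster
		}
	}
	return true
}

func main() {
	if hasExistingCluster() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no existing configuration, running kubeadm init")
	}
}
```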
	I1216 04:29:41.803169  475694 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:29:41.803241  475694 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:29:41.810442  475694 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:29:41.810892  475694 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-763073" does not appear in /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.811005  475694 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-438353/kubeconfig needs updating (will repair): [kubeconfig missing "functional-763073" cluster setting kubeconfig missing "functional-763073" context setting]
	I1216 04:29:41.811272  475694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.811696  475694 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.811844  475694 kapi.go:59] client config for functional-763073: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:29:41.812430  475694 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 04:29:41.812449  475694 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 04:29:41.812455  475694 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 04:29:41.812459  475694 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 04:29:41.812464  475694 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 04:29:41.812504  475694 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
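	The client config dumped above amounts to mutual TLS against https://192.168.49.2:8441 using the profile's client cert/key and minikube's CA. A standard-library sketch of the equivalent transport (the file paths are copied from the log; everything else is illustrative, not minikube's actual client construction):

```go
// Build an mTLS HTTP client matching the rest.Config fields in the log
// and issue the same GET the node_ready poll uses.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	base := "/home/jenkins/minikube-integration/22158-438353/.minikube"
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/functional-763073/client.crt",
		base+"/profiles/functional-763073/client.key",
	)
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-763073")
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // the log's "connection refused" surfaces here
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```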
	I1216 04:29:41.812753  475694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:29:41.827245  475694 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 04:29:41.827324  475694 kubeadm.go:602] duration metric: took 24.148626ms to restartPrimaryControlPlane
	I1216 04:29:41.827348  475694 kubeadm.go:403] duration metric: took 63.050551ms to StartCluster
	I1216 04:29:41.827392  475694 settings.go:142] acquiring lock: {Name:mk7579526d30444d4a36dd9eeacfd82389e55168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.827497  475694 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.828225  475694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.828522  475694 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:29:41.828868  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:41.828926  475694 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:29:41.829003  475694 addons.go:70] Setting storage-provisioner=true in profile "functional-763073"
	I1216 04:29:41.829025  475694 addons.go:239] Setting addon storage-provisioner=true in "functional-763073"
	I1216 04:29:41.829051  475694 host.go:66] Checking if "functional-763073" exists ...
	I1216 04:29:41.829717  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.829866  475694 addons.go:70] Setting default-storageclass=true in profile "functional-763073"
	I1216 04:29:41.829889  475694 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-763073"
	I1216 04:29:41.830179  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.835425  475694 out.go:179] * Verifying Kubernetes components...
	I1216 04:29:41.843204  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:41.852282  475694 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.852487  475694 kapi.go:59] client config for functional-763073: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:29:41.852847  475694 addons.go:239] Setting addon default-storageclass=true in "functional-763073"
	I1216 04:29:41.852883  475694 host.go:66] Checking if "functional-763073" exists ...
	I1216 04:29:41.853441  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.902066  475694 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:29:41.905129  475694 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:41.905181  475694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:29:41.905276  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:41.908977  475694 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:41.909002  475694 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:29:41.909132  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:41.960105  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:41.975058  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
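	`scp memory --> ...` in the lines above means the addon manifest is streamed from an embedded asset rather than copied from a file on disk. A rough stand-in using the plain ssh CLI and the connection details the log prints (port 33148, the profile's id_rsa key); the YAML content here is a placeholder, and this is not how minikube's sshutil actually transfers it:

```go
// Stream an in-memory manifest to the node over ssh, landing it at the
// same path the log shows for the storage-provisioner addon.
package main

import (
	"os/exec"
	"strings"
)

func main() {
	manifest := "# embedded storage-provisioner.yaml would go here\n"
	cmd := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa",
		"-p", "33148", "docker@127.0.0.1",
		"sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null")
	cmd.Stdin = strings.NewReader(manifest) // the "memory" side of scp memory --> path
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```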
	I1216 04:29:42.043859  475694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:29:42.092471  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:42.106008  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:42.818195  475694 node_ready.go:35] waiting up to 6m0s for node "functional-763073" to be "Ready" ...
	I1216 04:29:42.818367  475694 type.go:168] "Request Body" body=""
	I1216 04:29:42.818432  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:42.818659  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:42.818682  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818701  475694 retry.go:31] will retry after 327.643243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818740  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:42.818752  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818759  475694 retry.go:31] will retry after 171.339125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
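	The retry.go lines follow a simple pattern: re-run the failed kubectl apply after a growing, jittered delay, which is why the waits above step from ~170ms up toward multiple seconds across attempts. A generic sketch of that loop (not minikube's actual retry implementation):

```go
// Retry a kubectl apply with roughly doubling, jittered backoff,
// matching the "will retry after ..." cadence in the log.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
		if err == nil {
			return nil
		}
		// Jitter so concurrent appliers (storageclass + storage-provisioner
		// above) don't retry in lockstep, then roughly double the delay.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("apply failed, will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	_ = applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5)
}
```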
	I1216 04:29:42.818814  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:42.990327  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:43.052462  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.052555  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.052597  475694 retry.go:31] will retry after 320.089446ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.146742  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.207665  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.212209  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.212243  475694 retry.go:31] will retry after 291.464307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.318395  475694 type.go:168] "Request Body" body=""
	I1216 04:29:43.318472  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:43.318814  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:43.373308  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:43.435189  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.435254  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.435280  475694 retry.go:31] will retry after 781.758867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.504448  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.571334  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.571371  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.571390  475694 retry.go:31] will retry after 332.937553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.818906  475694 type.go:168] "Request Body" body=""
	I1216 04:29:43.818991  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:43.819297  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:43.904706  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.962384  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.966307  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.966396  475694 retry.go:31] will retry after 1.136896719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.217759  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:44.279618  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:44.283381  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.283415  475694 retry.go:31] will retry after 1.1051557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.318552  475694 type.go:168] "Request Body" body=""
	I1216 04:29:44.318673  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:44.319015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:44.818498  475694 type.go:168] "Request Body" body=""
	I1216 04:29:44.818571  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:44.818910  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:44.818988  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
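	node_ready.go is doing the same thing on the read side: poll GET /api/v1/nodes/functional-763073 every 500ms for up to 6 minutes, logging and tolerating connection-refused while the apiserver comes back. A schematic version of that wait (the fetch is a stand-in that always fails, as it does throughout this log; see the mTLS sketch earlier for real client setup):

```go
// Poll a node's Ready condition with a fixed interval and deadline,
// treating transient errors as retryable, as the W-level lines show.
package main

import (
	"errors"
	"fmt"
	"time"
)

// getNodeReady stands in for the GET request in the log; it would
// report whether the node's Ready condition is True.
func getNodeReady(name string) (bool, error) {
	return false, errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")
}

func waitNodeReady(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := getNodeReady(name)
		if err != nil {
			// Log and keep polling while the apiserver is down.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else if ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q never became Ready within %v", name, timeout)
}

func main() {
	_ = waitNodeReady("functional-763073", 6*time.Minute)
}
```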
	I1216 04:29:45.103534  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:45.194787  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:45.195010  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.195099  475694 retry.go:31] will retry after 1.211699823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.319146  475694 type.go:168] "Request Body" body=""
	I1216 04:29:45.319235  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:45.319562  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:45.388763  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:45.456804  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:45.456849  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.456877  475694 retry.go:31] will retry after 720.865488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.819295  475694 type.go:168] "Request Body" body=""
	I1216 04:29:45.819381  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:45.819670  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:46.178239  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:46.241684  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:46.241730  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.241750  475694 retry.go:31] will retry after 2.398929444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.318930  475694 type.go:168] "Request Body" body=""
	I1216 04:29:46.319008  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:46.319303  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:46.407630  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:46.476894  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:46.476941  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.476959  475694 retry.go:31] will retry after 1.300502308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.818702  475694 type.go:168] "Request Body" body=""
	I1216 04:29:46.818786  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:46.819124  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:46.819187  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:47.318514  475694 type.go:168] "Request Body" body=""
	I1216 04:29:47.318594  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:47.318866  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:47.778651  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:47.819040  475694 type.go:168] "Request Body" body=""
	I1216 04:29:47.819112  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:47.819424  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:47.836852  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:47.840282  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:47.840312  475694 retry.go:31] will retry after 3.994114703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.318482  475694 type.go:168] "Request Body" body=""
	I1216 04:29:48.318555  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:48.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:48.641498  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:48.705855  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:48.705903  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.705923  475694 retry.go:31] will retry after 1.757515206s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.819100  475694 type.go:168] "Request Body" body=""
	I1216 04:29:48.819185  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:48.819457  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:48.819514  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:49.319285  475694 type.go:168] "Request Body" body=""
	I1216 04:29:49.319362  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:49.319697  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:49.819385  475694 type.go:168] "Request Body" body=""
	I1216 04:29:49.819456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:49.819795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:50.318415  475694 type.go:168] "Request Body" body=""
	I1216 04:29:50.318509  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:50.318828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:50.464331  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:50.523255  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:50.523310  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:50.523330  475694 retry.go:31] will retry after 5.029530817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:50.818441  475694 type.go:168] "Request Body" body=""
	I1216 04:29:50.818532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:50.818884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:51.318457  475694 type.go:168] "Request Body" body=""
	I1216 04:29:51.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:51.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:51.318895  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:51.819013  475694 type.go:168] "Request Body" body=""
	I1216 04:29:51.819120  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:51.819434  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:51.834846  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:51.906733  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:51.906789  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:51.906807  475694 retry.go:31] will retry after 4.132534587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:52.319380  475694 type.go:168] "Request Body" body=""
	I1216 04:29:52.319456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:52.319782  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:52.818402  475694 type.go:168] "Request Body" body=""
	I1216 04:29:52.818481  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:52.818820  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:53.318399  475694 type.go:168] "Request Body" body=""
	I1216 04:29:53.318484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:53.318781  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:53.818364  475694 type.go:168] "Request Body" body=""
	I1216 04:29:53.818436  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:53.818718  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:53.818768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:54.318470  475694 type.go:168] "Request Body" body=""
	I1216 04:29:54.318553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:54.318855  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:54.818416  475694 type.go:168] "Request Body" body=""
	I1216 04:29:54.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:54.818791  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:55.318474  475694 type.go:168] "Request Body" body=""
	I1216 04:29:55.318563  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:55.318906  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:55.553265  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:55.626702  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:55.630832  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:55.630867  475694 retry.go:31] will retry after 7.132223529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
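
The "will retry after ..." lines above come from minikube's backoff helper (retry.go), which re-runs the failed kubectl apply after a growing, jittered delay. As a rough illustration only, such a loop might look like the sketch below; this is hypothetical code, not minikube's actual implementation, with the kubeconfig and manifest paths taken from the log:

// Minimal sketch of the retry pattern visible in the log (NOT minikube's
// actual retry.go): re-run `kubectl apply` until it succeeds, sleeping a
// jittered, roughly increasing delay between attempts.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"os/exec"
	"time"
)

func applyWithRetry(kubeconfig, manifest string, attempts int) error {
	base := 5 * time.Second
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %w\n%s", err, out)
		if i == attempts-1 {
			break
		}
		// Jitter keeps parallel appliers from retrying in lockstep, which is
		// why the log shows uneven delays (~7s, ~11s, ~8s, ~28s, ...).
		sleep := base + time.Duration(rand.Int63n(int64(base)*(int64(i)+1)))
		fmt.Printf("will retry after %s: %v\n", sleep, lastErr)
		time.Sleep(sleep)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
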
	[... one refused GET poll at 04:29:55.819 elided ...]
	W1216 04:29:55.819756  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
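
The surrounding GET /api/v1/nodes/functional-763073 traffic is minikube waiting for the node's Ready condition while the apiserver restarts. Below is a minimal sketch of such a poll, assuming k8s.io/client-go and using illustrative names; it is not minikube's node_ready.go, and as a fragment it needs a configured clientset to run:

// Sketch of a readiness poll (assumes k8s.io/client-go): ask the apiserver
// for the node every 500ms and keep retrying on connection errors, exactly
// the "will retry" behavior logged above.
package nodeready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// While the apiserver restarts this surfaces as the
				// "connection refused" warning in the log; keep polling.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}
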
	I1216 04:29:56.040181  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:56.104678  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:56.104716  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:56.104735  475694 retry.go:31] will retry after 8.857583825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... refused GET polls and node_ready.go "will retry" warnings repeat every ~500ms from 04:29:56 through 04:30:02 ...]
	I1216 04:30:02.763349  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[... one refused GET poll at 04:30:02.818 elided ...]
	I1216 04:30:02.830785  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:02.830835  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:02.830855  475694 retry.go:31] will retry after 11.115111011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... refused GET polls and a node_ready warning repeat from 04:30:03 through 04:30:04 ...]
	I1216 04:30:04.963132  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:05.030528  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:05.030573  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:05.030594  475694 retry.go:31] will retry after 13.807129774s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
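
Each apply fails during validation because kubectl first tries to download the OpenAPI schema from the apiserver, which is still refusing connections; the manifests themselves are never the problem, and the error text notes that --validate=false would skip the check. A hypothetical pre-check (not part of minikube) could probe the apiserver's port before invoking kubectl:

// Hypothetical helper: probe the apiserver's TCP port before running
// kubectl, since apply-time validation must fetch /openapi/v2 from it and
// fails with "connection refused" while the apiserver restarts.
package main

import (
	"fmt"
	"net"
	"time"
)

func apiserverUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false // e.g. dial tcp 192.168.49.2:8441: connect: connection refused
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("apiserver reachable:", apiserverUp("192.168.49.2:8441"))
}
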
	[... refused GET polls and node_ready warnings repeat from 04:30:05 through 04:30:13 ...]
	I1216 04:30:13.946231  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:14.010550  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:14.014827  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:14.014869  475694 retry.go:31] will retry after 8.112010712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... refused GET polls and node_ready warnings repeat from 04:30:14 through 04:30:18 ...]
	I1216 04:30:18.838055  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:18.893739  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:18.897596  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:18.897631  475694 retry.go:31] will retry after 11.366080685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... refused GET polls and a node_ready warning repeat from 04:30:19 through 04:30:21 ...]
	I1216 04:30:22.127748  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:22.189082  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:22.189129  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:22.189148  475694 retry.go:31] will retry after 27.844564007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... refused GET polls and node_ready warnings repeat from 04:30:22 through 04:30:29 ...]
	I1216 04:30:30.264789  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[... one refused GET poll and a node_ready warning at 04:30:30.318 elided ...]
	I1216 04:30:30.329449  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:30.329484  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:30.329503  475694 retry.go:31] will retry after 18.349811318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... refused GET polls and node_ready warnings repeat from 04:30:30 through 04:30:40 ...]
	I1216 04:30:41.318469  475694 type.go:168] "Request Body" body=""
	I1216 04:30:41.318542  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:41.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:41.318917  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:41.819023  475694 type.go:168] "Request Body" body=""
	I1216 04:30:41.819096  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:41.819434  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:42.319088  475694 type.go:168] "Request Body" body=""
	I1216 04:30:42.319177  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:42.319455  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:42.819310  475694 type.go:168] "Request Body" body=""
	I1216 04:30:42.819387  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:42.819732  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:43.318452  475694 type.go:168] "Request Body" body=""
	I1216 04:30:43.318526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:43.318861  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:43.818401  475694 type.go:168] "Request Body" body=""
	I1216 04:30:43.818480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:43.818796  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:43.818851  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:44.318448  475694 type.go:168] "Request Body" body=""
	I1216 04:30:44.318527  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:44.318869  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:44.818424  475694 type.go:168] "Request Body" body=""
	I1216 04:30:44.818501  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:44.818836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:45.318828  475694 type.go:168] "Request Body" body=""
	I1216 04:30:45.318911  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:45.319336  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:45.819228  475694 type.go:168] "Request Body" body=""
	I1216 04:30:45.819306  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:45.819658  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:45.819718  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:46.318375  475694 type.go:168] "Request Body" body=""
	I1216 04:30:46.318460  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:46.318811  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:46.818660  475694 type.go:168] "Request Body" body=""
	I1216 04:30:46.818733  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:46.819015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:47.318699  475694 type.go:168] "Request Body" body=""
	I1216 04:30:47.318774  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:47.319086  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:47.818455  475694 type.go:168] "Request Body" body=""
	I1216 04:30:47.818531  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:47.818830  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:48.318401  475694 type.go:168] "Request Body" body=""
	I1216 04:30:48.318484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:48.318806  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:48.318869  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:48.679520  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:48.741510  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:48.741587  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:48.741616  475694 retry.go:31] will retry after 29.090794722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
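	The pair of entries above captures minikube's apply-and-retry pattern: kubectl exits with status 1 because nothing answers on port 8441, and retry.go schedules the next attempt ("will retry after 29.090794722s"). A minimal Go sketch of a retry loop of that shape (illustrative only; minikube's real retry.go policy is not visible in this log, so the backoff formula here is an assumption):

		package main

		import (
			"errors"
			"fmt"
			"math/rand"
			"time"
		)

		// retry runs fn until it succeeds or attempts are exhausted, sleeping a
		// growing, jittered delay between failures -- the "will retry after Ns"
		// behaviour visible above.
		func retry(maxAttempts int, base time.Duration, fn func() error) error {
			var err error
			for attempt := 0; attempt < maxAttempts; attempt++ {
				if err = fn(); err == nil {
					return nil
				}
				if attempt == maxAttempts-1 {
					break
				}
				// Assumed policy: exponential backoff plus random jitter.
				delay := base<<uint(attempt) + time.Duration(rand.Int63n(int64(base)))
				fmt.Printf("will retry after %s: %v\n", delay, err)
				time.Sleep(delay)
			}
			return err
		}

		func main() {
			err := retry(3, 500*time.Millisecond, func() error {
				// Stand-in for the failing kubectl apply.
				return errors.New("connect: connection refused")
			})
			fmt.Println("gave up:", err)
		}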
	[... three more refused polling attempts, 04:30:48.8-04:30:49.8 ...]
	I1216 04:30:50.034416  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:50.096674  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:50.100468  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:50.100502  475694 retry.go:31] will retry after 39.426681546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
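	Interleaved with these apply attempts, node_ready.go polls the node's Ready condition every ~500ms. A standalone client-go sketch of that check (hypothetical program, not minikube's code; the kubeconfig path and node name are taken from the log for illustration):

		package main

		import (
			"context"
			"fmt"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			for {
				node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-763073", metav1.GetOptions{})
				if err != nil {
					// The branch this log keeps hitting: connection refused.
					fmt.Println("error getting node (will retry):", err)
				} else {
					for _, c := range node.Status.Conditions {
						if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
							fmt.Println("node is Ready")
							return
						}
					}
				}
				time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
			}
		}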
	[... node polling resumed at the same ~500ms cadence from 04:30:50 through 04:31:17.8, with every request refused and periodic node_ready.go will-retry warnings ...]
	I1216 04:31:17.833208  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:31:17.902395  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:17.906323  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:17.906439  475694 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
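	Note that every "Response" line in this log carries status="" and milliseconds=0: the TCP dial is refused, so the transport returns an error and there is no HTTP response to report. A simplified stand-in for the kind of logging round-tripper that produces these pairs (illustrative; not the actual k8s.io/client-go round_trippers code):

		package main

		import (
			"log"
			"net/http"
			"time"
		)

		// loggingRT mimics the "Request"/"Response" pairs seen in this log.
		type loggingRT struct{ next http.RoundTripper }

		func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
			start := time.Now()
			log.Printf(`"Request" verb=%q url=%q`, req.Method, req.URL.String())
			resp, err := l.next.RoundTrip(req)
			ms := time.Since(start).Milliseconds()
			if err != nil {
				// Dial refused: no HTTP response exists, so status logs as "".
				log.Printf(`"Response" status="" milliseconds=%d`, ms)
				return nil, err
			}
			log.Printf(`"Response" status=%q milliseconds=%d`, resp.Status, ms)
			return resp, nil
		}

		func main() {
			c := &http.Client{Transport: loggingRT{next: http.DefaultTransport}, Timeout: 2 * time.Second}
			// 192.0.2.1 is a reserved documentation address; the dial fails, mirroring the log.
			_, err := c.Get("https://192.0.2.1:8441/api/v1/nodes/functional-763073")
			log.Println("request error:", err)
		}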
	[... polling continued unchanged from 04:31:18.3 through 04:31:29.3, every attempt refused ...]
	I1216 04:31:29.528240  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:31:29.598877  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:29.598918  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:29.598995  475694 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:31:29.602136  475694 out.go:179] * Enabled addons: 
	I1216 04:31:29.604114  475694 addons.go:530] duration metric: took 1m47.775177414s for enable addons: enabled=[]
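The storageclass failure above is a side effect of the same outage: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so the apply dies on "failed to download openapi" before any object is submitted. Note the two addresses in play: the addon apply (driven by /var/lib/minikube/kubeconfig) dials localhost:8441, while the readiness poll dials 192.168.49.2:8441; both are refused, which points at the apiserver itself rather than routing. The --validate=false hint in the error message only skips the schema fetch; the apply would still fail while port 8441 refuses connections. A sketch of both variants, with the command and paths copied from the log (this illustrates the failure mode only, it is not a fix):

    # As run by minikube; fails while fetching the OpenAPI schema:
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
        -f /etc/kubernetes/addons/storageclass.yaml

    # With client-side validation off the schema fetch is skipped, but the
    # request itself still needs a reachable apiserver and fails the same way:
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
        --validate=false -f /etc/kubernetes/addons/storageclass.yaml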
	I1216 04:31:29.818770  475694 type.go:168] "Request Body" body=""
	I1216 04:31:29.818886  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:29.819272  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:30.319022  475694 type.go:168] "Request Body" body=""
	I1216 04:31:30.319147  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:30.319404  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:30.819213  475694 type.go:168] "Request Body" body=""
	I1216 04:31:30.819315  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:30.819674  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:31.318340  475694 type.go:168] "Request Body" body=""
	I1216 04:31:31.318412  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:31.318743  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:31.318800  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:31.818902  475694 type.go:168] "Request Body" body=""
	I1216 04:31:31.818970  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:31.819227  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:32.319058  475694 type.go:168] "Request Body" body=""
	I1216 04:31:32.319135  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:32.319508  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:32.819330  475694 type.go:168] "Request Body" body=""
	I1216 04:31:32.819408  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:32.819753  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:33.318423  475694 type.go:168] "Request Body" body=""
	I1216 04:31:33.318501  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:33.318811  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:33.318863  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:33.818381  475694 type.go:168] "Request Body" body=""
	I1216 04:31:33.818456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:33.818785  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:34.318363  475694 type.go:168] "Request Body" body=""
	I1216 04:31:34.318438  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:34.318790  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:34.819369  475694 type.go:168] "Request Body" body=""
	I1216 04:31:34.819438  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:34.819713  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:35.318423  475694 type.go:168] "Request Body" body=""
	I1216 04:31:35.318500  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:35.318872  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:35.318943  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:35.818615  475694 type.go:168] "Request Body" body=""
	I1216 04:31:35.818692  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:35.819009  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:36.318408  475694 type.go:168] "Request Body" body=""
	I1216 04:31:36.318490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:36.318747  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:36.818925  475694 type.go:168] "Request Body" body=""
	I1216 04:31:36.819003  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:36.819578  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:37.319361  475694 type.go:168] "Request Body" body=""
	I1216 04:31:37.319459  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:37.319790  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:37.319835  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:37.818431  475694 type.go:168] "Request Body" body=""
	I1216 04:31:37.818525  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:37.818876  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:38.318453  475694 type.go:168] "Request Body" body=""
	I1216 04:31:38.318535  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:38.318874  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:38.818429  475694 type.go:168] "Request Body" body=""
	I1216 04:31:38.818504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:38.818816  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:39.318529  475694 type.go:168] "Request Body" body=""
	I1216 04:31:39.318609  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:39.318895  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:39.818381  475694 type.go:168] "Request Body" body=""
	I1216 04:31:39.818456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:39.818789  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:39.818858  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:40.318433  475694 type.go:168] "Request Body" body=""
	I1216 04:31:40.318507  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:40.318811  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:40.818376  475694 type.go:168] "Request Body" body=""
	I1216 04:31:40.818450  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:40.818707  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:41.318416  475694 type.go:168] "Request Body" body=""
	I1216 04:31:41.318824  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:41.319203  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:41.819213  475694 type.go:168] "Request Body" body=""
	I1216 04:31:41.819296  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:41.819635  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:41.819695  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:42.319416  475694 type.go:168] "Request Body" body=""
	I1216 04:31:42.319499  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:42.319800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:42.818819  475694 type.go:168] "Request Body" body=""
	I1216 04:31:42.818916  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:42.819270  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:43.319056  475694 type.go:168] "Request Body" body=""
	I1216 04:31:43.319132  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:43.319459  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:43.819240  475694 type.go:168] "Request Body" body=""
	I1216 04:31:43.819310  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:43.819650  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:44.319420  475694 type.go:168] "Request Body" body=""
	I1216 04:31:44.319496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:44.319840  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:44.319896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:44.818558  475694 type.go:168] "Request Body" body=""
	I1216 04:31:44.818637  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:44.818980  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:45.318674  475694 type.go:168] "Request Body" body=""
	I1216 04:31:45.318748  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:45.319042  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:45.818436  475694 type.go:168] "Request Body" body=""
	I1216 04:31:45.818512  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:45.818872  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:46.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:31:46.318525  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:46.318863  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:46.818761  475694 type.go:168] "Request Body" body=""
	I1216 04:31:46.818837  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:46.819095  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:46.819145  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:47.318441  475694 type.go:168] "Request Body" body=""
	I1216 04:31:47.318515  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:47.318857  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:47.818554  475694 type.go:168] "Request Body" body=""
	I1216 04:31:47.818627  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:47.818943  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:48.318406  475694 type.go:168] "Request Body" body=""
	I1216 04:31:48.318482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:48.318744  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:48.818444  475694 type.go:168] "Request Body" body=""
	I1216 04:31:48.818531  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:48.818844  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:49.318456  475694 type.go:168] "Request Body" body=""
	I1216 04:31:49.318533  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:49.318871  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:49.318926  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:49.818452  475694 type.go:168] "Request Body" body=""
	I1216 04:31:49.818529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:49.818832  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:50.318454  475694 type.go:168] "Request Body" body=""
	I1216 04:31:50.318530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:50.318907  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:50.818617  475694 type.go:168] "Request Body" body=""
	I1216 04:31:50.818699  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:50.819034  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:51.318728  475694 type.go:168] "Request Body" body=""
	I1216 04:31:51.318799  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:51.319084  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:51.319133  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:51.819260  475694 type.go:168] "Request Body" body=""
	I1216 04:31:51.819337  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:51.819646  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:52.319367  475694 type.go:168] "Request Body" body=""
	I1216 04:31:52.319460  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:52.319796  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:52.818415  475694 type.go:168] "Request Body" body=""
	I1216 04:31:52.818483  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:52.818735  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:53.318406  475694 type.go:168] "Request Body" body=""
	I1216 04:31:53.318485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:53.318824  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:53.818542  475694 type.go:168] "Request Body" body=""
	I1216 04:31:53.818618  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:53.818932  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:53.818988  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	[identical 500 ms polling cycles against https://192.168.49.2:8441/api/v1/nodes/functional-763073 continue from 04:31:54 through 04:32:24, each logging the same node_ready.go:55 connection-refused warning roughly every two seconds]
	W1216 04:32:24.819813  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:25.318436  475694 type.go:168] "Request Body" body=""
	I1216 04:32:25.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:25.318844  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:25.818411  475694 type.go:168] "Request Body" body=""
	I1216 04:32:25.818489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:25.818804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:26.318437  475694 type.go:168] "Request Body" body=""
	I1216 04:32:26.318513  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:26.318806  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:26.818705  475694 type.go:168] "Request Body" body=""
	I1216 04:32:26.818789  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:26.819111  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:27.318783  475694 type.go:168] "Request Body" body=""
	I1216 04:32:27.318852  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:27.319112  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:27.319155  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:27.818441  475694 type.go:168] "Request Body" body=""
	I1216 04:32:27.818517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:27.818848  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:28.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:32:28.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:28.318875  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:28.818407  475694 type.go:168] "Request Body" body=""
	I1216 04:32:28.818477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:28.818822  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:29.318518  475694 type.go:168] "Request Body" body=""
	I1216 04:32:29.318617  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:29.318953  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:29.818649  475694 type.go:168] "Request Body" body=""
	I1216 04:32:29.818733  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:29.819084  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:29.819143  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:30.318804  475694 type.go:168] "Request Body" body=""
	I1216 04:32:30.318881  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:30.319182  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:30.818908  475694 type.go:168] "Request Body" body=""
	I1216 04:32:30.818985  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:30.819365  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:31.319128  475694 type.go:168] "Request Body" body=""
	I1216 04:32:31.319211  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:31.319551  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:31.818625  475694 type.go:168] "Request Body" body=""
	I1216 04:32:31.818715  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:31.819005  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:32.318377  475694 type.go:168] "Request Body" body=""
	I1216 04:32:32.318452  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:32.318779  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:32.318830  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:32.818478  475694 type.go:168] "Request Body" body=""
	I1216 04:32:32.818558  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:32.818890  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:33.318419  475694 type.go:168] "Request Body" body=""
	I1216 04:32:33.318491  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:33.318763  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:33.818404  475694 type.go:168] "Request Body" body=""
	I1216 04:32:33.818487  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:33.818835  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:34.318540  475694 type.go:168] "Request Body" body=""
	I1216 04:32:34.318621  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:34.318936  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:34.318997  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:34.818434  475694 type.go:168] "Request Body" body=""
	I1216 04:32:34.818510  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:34.818779  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:35.318447  475694 type.go:168] "Request Body" body=""
	I1216 04:32:35.318531  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:35.318863  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:35.818451  475694 type.go:168] "Request Body" body=""
	I1216 04:32:35.818530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:35.818878  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:36.318556  475694 type.go:168] "Request Body" body=""
	I1216 04:32:36.318624  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:36.318986  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:36.319033  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:36.818822  475694 type.go:168] "Request Body" body=""
	I1216 04:32:36.818905  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:36.819233  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:37.319068  475694 type.go:168] "Request Body" body=""
	I1216 04:32:37.319154  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:37.319493  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:37.819197  475694 type.go:168] "Request Body" body=""
	I1216 04:32:37.819270  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:37.819602  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:38.319373  475694 type.go:168] "Request Body" body=""
	I1216 04:32:38.319452  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:38.319769  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:38.319827  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:38.818447  475694 type.go:168] "Request Body" body=""
	I1216 04:32:38.818527  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:38.818861  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:39.318461  475694 type.go:168] "Request Body" body=""
	I1216 04:32:39.318551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:39.318937  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:39.818658  475694 type.go:168] "Request Body" body=""
	I1216 04:32:39.818731  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:39.819050  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:40.318767  475694 type.go:168] "Request Body" body=""
	I1216 04:32:40.318846  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:40.319183  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:40.818948  475694 type.go:168] "Request Body" body=""
	I1216 04:32:40.819022  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:40.819278  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:40.819323  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:41.319042  475694 type.go:168] "Request Body" body=""
	I1216 04:32:41.319117  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:41.319435  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:41.818623  475694 type.go:168] "Request Body" body=""
	I1216 04:32:41.818705  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:41.819037  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:42.318429  475694 type.go:168] "Request Body" body=""
	I1216 04:32:42.318502  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:42.318792  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:42.818437  475694 type.go:168] "Request Body" body=""
	I1216 04:32:42.818515  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:42.818838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:43.318459  475694 type.go:168] "Request Body" body=""
	I1216 04:32:43.318541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:43.318887  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:43.318945  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:43.819357  475694 type.go:168] "Request Body" body=""
	I1216 04:32:43.819431  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:43.819742  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:44.318455  475694 type.go:168] "Request Body" body=""
	I1216 04:32:44.318551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:44.318871  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:44.818581  475694 type.go:168] "Request Body" body=""
	I1216 04:32:44.818656  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:44.818990  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:45.318689  475694 type.go:168] "Request Body" body=""
	I1216 04:32:45.318765  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:45.319069  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:45.319110  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:45.818468  475694 type.go:168] "Request Body" body=""
	I1216 04:32:45.818541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:45.818854  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:46.318349  475694 type.go:168] "Request Body" body=""
	I1216 04:32:46.318433  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:46.318756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:46.818690  475694 type.go:168] "Request Body" body=""
	I1216 04:32:46.818773  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:46.819032  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:47.318444  475694 type.go:168] "Request Body" body=""
	I1216 04:32:47.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:47.318860  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:47.818472  475694 type.go:168] "Request Body" body=""
	I1216 04:32:47.818551  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:47.818924  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:47.818986  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:48.319386  475694 type.go:168] "Request Body" body=""
	I1216 04:32:48.319456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:48.319715  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:48.818461  475694 type.go:168] "Request Body" body=""
	I1216 04:32:48.818557  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:48.818880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:49.319359  475694 type.go:168] "Request Body" body=""
	I1216 04:32:49.319434  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:49.319757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:49.819351  475694 type.go:168] "Request Body" body=""
	I1216 04:32:49.819434  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:49.819700  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:49.819743  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:50.318399  475694 type.go:168] "Request Body" body=""
	I1216 04:32:50.318483  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:50.318800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:50.818463  475694 type.go:168] "Request Body" body=""
	I1216 04:32:50.818546  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:50.818880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:51.318426  475694 type.go:168] "Request Body" body=""
	I1216 04:32:51.318508  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:51.318785  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:51.818955  475694 type.go:168] "Request Body" body=""
	I1216 04:32:51.819039  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:51.819431  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:52.319209  475694 type.go:168] "Request Body" body=""
	I1216 04:32:52.319287  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:52.319637  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:52.319692  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:52.818373  475694 type.go:168] "Request Body" body=""
	I1216 04:32:52.818449  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:52.818711  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:53.318405  475694 type.go:168] "Request Body" body=""
	I1216 04:32:53.318481  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:53.318829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:53.818362  475694 type.go:168] "Request Body" body=""
	I1216 04:32:53.818453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:53.818780  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:54.319380  475694 type.go:168] "Request Body" body=""
	I1216 04:32:54.319453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:54.319718  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:54.319768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:54.818452  475694 type.go:168] "Request Body" body=""
	I1216 04:32:54.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:54.818896  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:55.318601  475694 type.go:168] "Request Body" body=""
	I1216 04:32:55.318680  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:55.319023  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:55.818723  475694 type.go:168] "Request Body" body=""
	I1216 04:32:55.818804  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:55.819074  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:56.318355  475694 type.go:168] "Request Body" body=""
	I1216 04:32:56.318436  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:56.318777  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:56.818730  475694 type.go:168] "Request Body" body=""
	I1216 04:32:56.818807  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:56.819167  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:56.819227  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:57.318894  475694 type.go:168] "Request Body" body=""
	I1216 04:32:57.318969  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:57.319232  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:57.818968  475694 type.go:168] "Request Body" body=""
	I1216 04:32:57.819042  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:57.819399  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:58.319214  475694 type.go:168] "Request Body" body=""
	I1216 04:32:58.319287  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:58.319634  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:58.819335  475694 type.go:168] "Request Body" body=""
	I1216 04:32:58.819403  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:58.819672  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:58.819714  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:59.318342  475694 type.go:168] "Request Body" body=""
	I1216 04:32:59.318420  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:59.318754  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:59.818474  475694 type.go:168] "Request Body" body=""
	I1216 04:32:59.818558  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:59.818911  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:00.318619  475694 type.go:168] "Request Body" body=""
	I1216 04:33:00.319047  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:00.319356  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:00.819156  475694 type.go:168] "Request Body" body=""
	I1216 04:33:00.819244  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:00.819576  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:01.319425  475694 type.go:168] "Request Body" body=""
	I1216 04:33:01.319520  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:01.319865  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:01.319922  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:01.818853  475694 type.go:168] "Request Body" body=""
	I1216 04:33:01.818926  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:01.819244  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:02.319032  475694 type.go:168] "Request Body" body=""
	I1216 04:33:02.319108  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:02.319434  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:02.819246  475694 type.go:168] "Request Body" body=""
	I1216 04:33:02.819327  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:02.819678  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:03.319320  475694 type.go:168] "Request Body" body=""
	I1216 04:33:03.319398  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:03.319661  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:03.818365  475694 type.go:168] "Request Body" body=""
	I1216 04:33:03.818441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:03.818761  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:03.818823  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:04.318514  475694 type.go:168] "Request Body" body=""
	I1216 04:33:04.318596  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:04.318928  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:04.818433  475694 type.go:168] "Request Body" body=""
	I1216 04:33:04.818526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:04.818807  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:05.318444  475694 type.go:168] "Request Body" body=""
	I1216 04:33:05.318518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:05.318865  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:05.818451  475694 type.go:168] "Request Body" body=""
	I1216 04:33:05.818526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:05.818904  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:05.818960  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:06.318446  475694 type.go:168] "Request Body" body=""
	I1216 04:33:06.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:06.318787  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:06.818785  475694 type.go:168] "Request Body" body=""
	I1216 04:33:06.818857  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:06.819145  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:07.318817  475694 type.go:168] "Request Body" body=""
	I1216 04:33:07.318891  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:07.319210  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:07.818978  475694 type.go:168] "Request Body" body=""
	I1216 04:33:07.819056  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:07.819319  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:07.819368  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:08.319142  475694 type.go:168] "Request Body" body=""
	I1216 04:33:08.319217  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:08.319580  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:08.819296  475694 type.go:168] "Request Body" body=""
	I1216 04:33:08.819380  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:08.819759  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:09.318401  475694 type.go:168] "Request Body" body=""
	I1216 04:33:09.318476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:09.318763  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:09.818441  475694 type.go:168] "Request Body" body=""
	I1216 04:33:09.818517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:09.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:10.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:33:10.318527  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:10.318867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:10.318924  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:10.818402  475694 type.go:168] "Request Body" body=""
	I1216 04:33:10.818479  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:10.818769  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:11.318449  475694 type.go:168] "Request Body" body=""
	I1216 04:33:11.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:11.318839  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:11.819013  475694 type.go:168] "Request Body" body=""
	I1216 04:33:11.819090  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:11.819424  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:12.319142  475694 type.go:168] "Request Body" body=""
	I1216 04:33:12.319221  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:12.319548  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:12.319601  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:12.819365  475694 type.go:168] "Request Body" body=""
	I1216 04:33:12.819440  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:12.819754  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:13.318386  475694 type.go:168] "Request Body" body=""
	I1216 04:33:13.318466  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:13.318798  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:13.819148  475694 type.go:168] "Request Body" body=""
	I1216 04:33:13.819223  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:13.819475  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:14.319233  475694 type.go:168] "Request Body" body=""
	I1216 04:33:14.319312  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:14.319642  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:14.319694  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same "GET https://192.168.49.2:8441/api/v1/nodes/functional-763073" poll repeats on a ~500ms interval from 04:33:14 through 04:34:15, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready.go:55 "will retry" warning recurs after roughly every fifth failed attempt ...]
	I1216 04:34:16.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:34:16.318521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:16.318867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:16.818753  475694 type.go:168] "Request Body" body=""
	I1216 04:34:16.818825  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:16.819126  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:16.819186  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:17.318469  475694 type.go:168] "Request Body" body=""
	I1216 04:34:17.318558  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:17.318854  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:17.818418  475694 type.go:168] "Request Body" body=""
	I1216 04:34:17.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:17.818784  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:18.318425  475694 type.go:168] "Request Body" body=""
	I1216 04:34:18.318500  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:18.318756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:18.818343  475694 type.go:168] "Request Body" body=""
	I1216 04:34:18.818425  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:18.818802  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:19.318462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:19.318541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:19.318861  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:19.318915  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:19.818577  475694 type.go:168] "Request Body" body=""
	I1216 04:34:19.818646  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:19.818927  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:20.318439  475694 type.go:168] "Request Body" body=""
	I1216 04:34:20.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:20.318833  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:20.818433  475694 type.go:168] "Request Body" body=""
	I1216 04:34:20.818521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:20.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:21.319360  475694 type.go:168] "Request Body" body=""
	I1216 04:34:21.319430  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:21.319702  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:21.319743  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:21.818995  475694 type.go:168] "Request Body" body=""
	I1216 04:34:21.819068  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:21.819437  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:22.319208  475694 type.go:168] "Request Body" body=""
	I1216 04:34:22.319287  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:22.319613  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:22.819318  475694 type.go:168] "Request Body" body=""
	I1216 04:34:22.819390  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:22.819643  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:23.318344  475694 type.go:168] "Request Body" body=""
	I1216 04:34:23.318422  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:23.318762  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:23.818462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:23.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:23.818875  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:23.818927  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:24.318334  475694 type.go:168] "Request Body" body=""
	I1216 04:34:24.318402  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:24.318670  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:24.818364  475694 type.go:168] "Request Body" body=""
	I1216 04:34:24.818442  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:24.818790  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:25.318379  475694 type.go:168] "Request Body" body=""
	I1216 04:34:25.318455  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:25.318831  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:25.818514  475694 type.go:168] "Request Body" body=""
	I1216 04:34:25.818579  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:25.818836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:26.318398  475694 type.go:168] "Request Body" body=""
	I1216 04:34:26.318476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:26.318806  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:26.318858  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:26.818668  475694 type.go:168] "Request Body" body=""
	I1216 04:34:26.818748  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:26.819069  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:27.319360  475694 type.go:168] "Request Body" body=""
	I1216 04:34:27.319437  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:27.319709  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:27.818413  475694 type.go:168] "Request Body" body=""
	I1216 04:34:27.818495  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:27.818834  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:28.318554  475694 type.go:168] "Request Body" body=""
	I1216 04:34:28.318636  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:28.318951  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:28.319002  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:28.818426  475694 type.go:168] "Request Body" body=""
	I1216 04:34:28.818493  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:28.818750  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:29.319398  475694 type.go:168] "Request Body" body=""
	I1216 04:34:29.319469  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:29.319795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:29.818453  475694 type.go:168] "Request Body" body=""
	I1216 04:34:29.818532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:29.818867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:30.319342  475694 type.go:168] "Request Body" body=""
	I1216 04:34:30.319416  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:30.319671  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:30.319711  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:30.818394  475694 type.go:168] "Request Body" body=""
	I1216 04:34:30.818480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:30.818849  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:31.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:34:31.318497  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:31.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:31.818933  475694 type.go:168] "Request Body" body=""
	I1216 04:34:31.819001  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:31.819258  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:32.319093  475694 type.go:168] "Request Body" body=""
	I1216 04:34:32.319167  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:32.319503  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:32.819320  475694 type.go:168] "Request Body" body=""
	I1216 04:34:32.819401  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:32.819759  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:32.819825  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:33.318460  475694 type.go:168] "Request Body" body=""
	I1216 04:34:33.318582  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:33.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:33.818458  475694 type.go:168] "Request Body" body=""
	I1216 04:34:33.818536  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:33.818889  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:34.318460  475694 type.go:168] "Request Body" body=""
	I1216 04:34:34.318539  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:34.318890  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:34.818406  475694 type.go:168] "Request Body" body=""
	I1216 04:34:34.818484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:34.818755  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:35.318438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:35.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:35.318826  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:35.318869  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:35.818405  475694 type.go:168] "Request Body" body=""
	I1216 04:34:35.818477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:35.818828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:36.318423  475694 type.go:168] "Request Body" body=""
	I1216 04:34:36.318497  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:36.318761  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:36.818896  475694 type.go:168] "Request Body" body=""
	I1216 04:34:36.818970  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:36.819296  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:37.318456  475694 type.go:168] "Request Body" body=""
	I1216 04:34:37.318532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:37.318915  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:37.318974  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:37.818620  475694 type.go:168] "Request Body" body=""
	I1216 04:34:37.818687  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:37.818946  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:38.318430  475694 type.go:168] "Request Body" body=""
	I1216 04:34:38.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:38.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:38.818581  475694 type.go:168] "Request Body" body=""
	I1216 04:34:38.818653  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:38.818976  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:39.319318  475694 type.go:168] "Request Body" body=""
	I1216 04:34:39.319398  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:39.319717  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:39.319766  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:39.819368  475694 type.go:168] "Request Body" body=""
	I1216 04:34:39.819451  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:39.819802  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:40.319399  475694 type.go:168] "Request Body" body=""
	I1216 04:34:40.319478  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:40.319815  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:40.819382  475694 type.go:168] "Request Body" body=""
	I1216 04:34:40.819458  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:40.819720  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:41.318432  475694 type.go:168] "Request Body" body=""
	I1216 04:34:41.318502  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:41.318828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:41.818913  475694 type.go:168] "Request Body" body=""
	I1216 04:34:41.818984  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:41.819332  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:41.819390  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:42.319148  475694 type.go:168] "Request Body" body=""
	I1216 04:34:42.319222  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:42.319522  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:42.819320  475694 type.go:168] "Request Body" body=""
	I1216 04:34:42.819397  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:42.819739  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:43.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:34:43.318503  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:43.319081  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:43.818686  475694 type.go:168] "Request Body" body=""
	I1216 04:34:43.818751  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:43.819000  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:44.318419  475694 type.go:168] "Request Body" body=""
	I1216 04:34:44.318489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:44.318800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:44.318860  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:44.818438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:44.818518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:44.818902  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:45.319407  475694 type.go:168] "Request Body" body=""
	I1216 04:34:45.319489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:45.319845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:45.818371  475694 type.go:168] "Request Body" body=""
	I1216 04:34:45.818447  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:45.818804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:46.318536  475694 type.go:168] "Request Body" body=""
	I1216 04:34:46.318624  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:46.318974  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:46.319036  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:46.818922  475694 type.go:168] "Request Body" body=""
	I1216 04:34:46.819000  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:46.819277  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:47.319079  475694 type.go:168] "Request Body" body=""
	I1216 04:34:47.319153  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:47.319486  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:47.819266  475694 type.go:168] "Request Body" body=""
	I1216 04:34:47.819341  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:47.819660  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:48.319327  475694 type.go:168] "Request Body" body=""
	I1216 04:34:48.319403  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:48.319723  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:48.319773  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:48.818362  475694 type.go:168] "Request Body" body=""
	I1216 04:34:48.818441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:48.818771  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:49.318493  475694 type.go:168] "Request Body" body=""
	I1216 04:34:49.318566  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:49.318886  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:49.818551  475694 type.go:168] "Request Body" body=""
	I1216 04:34:49.818618  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:49.818873  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:50.318400  475694 type.go:168] "Request Body" body=""
	I1216 04:34:50.318482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:50.318812  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:50.818522  475694 type.go:168] "Request Body" body=""
	I1216 04:34:50.818600  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:50.818928  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:50.818980  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:51.318625  475694 type.go:168] "Request Body" body=""
	I1216 04:34:51.318702  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:51.319079  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:51.819046  475694 type.go:168] "Request Body" body=""
	I1216 04:34:51.819123  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:51.819663  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:52.319344  475694 type.go:168] "Request Body" body=""
	I1216 04:34:52.319417  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:52.319779  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:52.818421  475694 type.go:168] "Request Body" body=""
	I1216 04:34:52.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:52.818829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:53.318447  475694 type.go:168] "Request Body" body=""
	I1216 04:34:53.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:53.318845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:53.318897  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:53.818432  475694 type.go:168] "Request Body" body=""
	I1216 04:34:53.818506  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:53.818834  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:54.319276  475694 type.go:168] "Request Body" body=""
	I1216 04:34:54.319352  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:54.319592  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:54.819372  475694 type.go:168] "Request Body" body=""
	I1216 04:34:54.819451  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:54.819794  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:55.318383  475694 type.go:168] "Request Body" body=""
	I1216 04:34:55.318468  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:55.318798  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:55.818467  475694 type.go:168] "Request Body" body=""
	I1216 04:34:55.818538  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:55.818798  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:55.818839  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:56.318396  475694 type.go:168] "Request Body" body=""
	I1216 04:34:56.318467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:56.318799  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:56.818695  475694 type.go:168] "Request Body" body=""
	I1216 04:34:56.818770  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:56.819054  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:57.318729  475694 type.go:168] "Request Body" body=""
	I1216 04:34:57.318810  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:57.319103  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:57.818438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:57.818512  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:57.818836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:57.818893  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:58.318454  475694 type.go:168] "Request Body" body=""
	I1216 04:34:58.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:58.318867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:58.818427  475694 type.go:168] "Request Body" body=""
	I1216 04:34:58.818499  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:58.818756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:59.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:34:59.318530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:59.318870  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:59.818462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:59.818542  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:59.818859  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:59.818914  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:00.326681  475694 type.go:168] "Request Body" body=""
	I1216 04:35:00.327158  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:00.327589  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:00.818334  475694 type.go:168] "Request Body" body=""
	I1216 04:35:00.818414  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:00.818768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:01.318487  475694 type.go:168] "Request Body" body=""
	I1216 04:35:01.318573  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:01.318953  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:01.818952  475694 type.go:168] "Request Body" body=""
	I1216 04:35:01.819020  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:01.819285  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:01.819326  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:02.319143  475694 type.go:168] "Request Body" body=""
	I1216 04:35:02.319233  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:02.319559  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:02.819407  475694 type.go:168] "Request Body" body=""
	I1216 04:35:02.819477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:02.819810  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:03.318360  475694 type.go:168] "Request Body" body=""
	I1216 04:35:03.318434  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:03.318682  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:03.818469  475694 type.go:168] "Request Body" body=""
	I1216 04:35:03.818556  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:03.818922  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:04.318461  475694 type.go:168] "Request Body" body=""
	I1216 04:35:04.318553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:04.318846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:04.318896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:04.818557  475694 type.go:168] "Request Body" body=""
	I1216 04:35:04.818626  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:04.818950  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:05.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:35:05.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:05.318874  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:05.818589  475694 type.go:168] "Request Body" body=""
	I1216 04:35:05.818665  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:05.819015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:06.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:35:06.318491  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:06.318748  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:06.818795  475694 type.go:168] "Request Body" body=""
	I1216 04:35:06.818876  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:06.819216  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:06.819271  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:07.319075  475694 type.go:168] "Request Body" body=""
	I1216 04:35:07.319158  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:07.319501  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:07.819216  475694 type.go:168] "Request Body" body=""
	I1216 04:35:07.819290  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:07.819547  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:08.319297  475694 type.go:168] "Request Body" body=""
	I1216 04:35:08.319373  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:08.319684  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:08.819382  475694 type.go:168] "Request Body" body=""
	I1216 04:35:08.819455  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:08.819785  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:08.819836  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:09.318416  475694 type.go:168] "Request Body" body=""
	I1216 04:35:09.318490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:09.318808  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:09.818352  475694 type.go:168] "Request Body" body=""
	I1216 04:35:09.818429  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:09.818778  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:10.318414  475694 type.go:168] "Request Body" body=""
	I1216 04:35:10.318493  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:10.318815  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:10.818431  475694 type.go:168] "Request Body" body=""
	I1216 04:35:10.818498  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:10.818758  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:11.318468  475694 type.go:168] "Request Body" body=""
	I1216 04:35:11.318548  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:11.318880  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:11.318937  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:11.818967  475694 type.go:168] "Request Body" body=""
	I1216 04:35:11.819040  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:11.819370  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:12.318986  475694 type.go:168] "Request Body" body=""
	I1216 04:35:12.319065  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:12.319377  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:12.819142  475694 type.go:168] "Request Body" body=""
	I1216 04:35:12.819222  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:12.819598  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:13.319413  475694 type.go:168] "Request Body" body=""
	I1216 04:35:13.319499  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:13.319864  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:13.319929  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:13.818336  475694 type.go:168] "Request Body" body=""
	I1216 04:35:13.818409  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:13.818718  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:14.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:35:14.318496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:14.318831  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:14.818415  475694 type.go:168] "Request Body" body=""
	I1216 04:35:14.818500  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:14.818819  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:15.318419  475694 type.go:168] "Request Body" body=""
	I1216 04:35:15.318513  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:15.318797  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:15.818468  475694 type.go:168] "Request Body" body=""
	I1216 04:35:15.818560  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:15.818910  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:15.818967  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:16.318443  475694 type.go:168] "Request Body" body=""
	I1216 04:35:16.318518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:16.318843  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:16.818768  475694 type.go:168] "Request Body" body=""
	I1216 04:35:16.818839  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:16.819094  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:17.318429  475694 type.go:168] "Request Body" body=""
	I1216 04:35:17.318503  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:17.318829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:17.818390  475694 type.go:168] "Request Body" body=""
	I1216 04:35:17.818465  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:17.818786  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:18.318479  475694 type.go:168] "Request Body" body=""
	I1216 04:35:18.318546  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:18.318807  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:18.318849  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:18.818373  475694 type.go:168] "Request Body" body=""
	I1216 04:35:18.818453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:18.818776  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:19.318510  475694 type.go:168] "Request Body" body=""
	I1216 04:35:19.318592  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:19.318922  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:19.818621  475694 type.go:168] "Request Body" body=""
	I1216 04:35:19.818702  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:19.818973  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:20.318397  475694 type.go:168] "Request Body" body=""
	I1216 04:35:20.318480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:20.318838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:20.318892  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:20.818426  475694 type.go:168] "Request Body" body=""
	I1216 04:35:20.818507  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:20.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:21.318544  475694 type.go:168] "Request Body" body=""
	I1216 04:35:21.318656  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:21.318922  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:21.819053  475694 type.go:168] "Request Body" body=""
	I1216 04:35:21.819131  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:21.819472  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:22.319274  475694 type.go:168] "Request Body" body=""
	I1216 04:35:22.319345  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:22.319672  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:22.319728  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:22.818396  475694 type.go:168] "Request Body" body=""
	I1216 04:35:22.818467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:22.818895  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:23.318440  475694 type.go:168] "Request Body" body=""
	I1216 04:35:23.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:23.318836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:23.818345  475694 type.go:168] "Request Body" body=""
	I1216 04:35:23.818420  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:23.818765  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:24.319370  475694 type.go:168] "Request Body" body=""
	I1216 04:35:24.319441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:24.319704  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:24.818474  475694 type.go:168] "Request Body" body=""
	I1216 04:35:24.818553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:24.818904  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:24.818962  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:25.318346  475694 type.go:168] "Request Body" body=""
	I1216 04:35:25.318430  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:25.318768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:25.819340  475694 type.go:168] "Request Body" body=""
	I1216 04:35:25.819421  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:25.819694  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:26.319409  475694 type.go:168] "Request Body" body=""
	I1216 04:35:26.319480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:26.319786  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:26.818711  475694 type.go:168] "Request Body" body=""
	I1216 04:35:26.818786  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:26.819098  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:26.819158  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:27.318411  475694 type.go:168] "Request Body" body=""
	I1216 04:35:27.318489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:27.318803  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:27.818479  475694 type.go:168] "Request Body" body=""
	I1216 04:35:27.818557  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:27.818881  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:28.318434  475694 type.go:168] "Request Body" body=""
	I1216 04:35:28.318510  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:28.318832  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:28.818494  475694 type.go:168] "Request Body" body=""
	I1216 04:35:28.818562  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:28.818812  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:29.318411  475694 type.go:168] "Request Body" body=""
	I1216 04:35:29.318484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:29.318838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:29.318892  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:29.818377  475694 type.go:168] "Request Body" body=""
	I1216 04:35:29.818455  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:29.818804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:30.319321  475694 type.go:168] "Request Body" body=""
	I1216 04:35:30.319394  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:30.319671  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:30.818400  475694 type.go:168] "Request Body" body=""
	I1216 04:35:30.818475  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:30.818821  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:31.318538  475694 type.go:168] "Request Body" body=""
	I1216 04:35:31.318610  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:31.318926  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:31.318982  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:31.819068  475694 type.go:168] "Request Body" body=""
	I1216 04:35:31.819136  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:31.819402  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:32.319162  475694 type.go:168] "Request Body" body=""
	I1216 04:35:32.319242  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:32.319568  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:32.819397  475694 type.go:168] "Request Body" body=""
	I1216 04:35:32.819471  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:32.819805  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:33.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:35:33.318490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:33.318749  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:33.818404  475694 type.go:168] "Request Body" body=""
	I1216 04:35:33.818483  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:33.818824  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:33.818882  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:34.318388  475694 type.go:168] "Request Body" body=""
	I1216 04:35:34.318473  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:34.318868  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:34.819425  475694 type.go:168] "Request Body" body=""
	I1216 04:35:34.819500  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:34.819756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:35.318461  475694 type.go:168] "Request Body" body=""
	I1216 04:35:35.318545  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:35.318883  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:35.818350  475694 type.go:168] "Request Body" body=""
	I1216 04:35:35.818457  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:35.818780  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:36.319383  475694 type.go:168] "Request Body" body=""
	I1216 04:35:36.319450  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:36.319711  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:36.319751  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:36.818719  475694 type.go:168] "Request Body" body=""
	I1216 04:35:36.818823  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:36.819149  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:37.318863  475694 type.go:168] "Request Body" body=""
	I1216 04:35:37.318957  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:37.319340  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:37.819103  475694 type.go:168] "Request Body" body=""
	I1216 04:35:37.819178  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:37.819440  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:38.318528  475694 type.go:168] "Request Body" body=""
	I1216 04:35:38.318602  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:38.318927  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:38.818449  475694 type.go:168] "Request Body" body=""
	I1216 04:35:38.818523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:38.818875  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:38.818930  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:39.318332  475694 type.go:168] "Request Body" body=""
	I1216 04:35:39.318414  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:39.318736  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:39.818477  475694 type.go:168] "Request Body" body=""
	I1216 04:35:39.818550  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:39.818846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:40.318380  475694 type.go:168] "Request Body" body=""
	I1216 04:35:40.318452  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:40.318777  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:40.818480  475694 type.go:168] "Request Body" body=""
	I1216 04:35:40.818560  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:40.818825  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:41.318437  475694 type.go:168] "Request Body" body=""
	I1216 04:35:41.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:41.318879  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:41.318931  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:41.818408  475694 type.go:168] "Request Body" body=""
	I1216 04:35:41.818485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:41.818817  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:42.319418  475694 type.go:168] "Request Body" body=""
	I1216 04:35:42.319504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:42.319849  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:42.818357  475694 type.go:168] "Request Body" body=""
	I1216 04:35:42.818432  475694 node_ready.go:38] duration metric: took 6m0.000197669s for node "functional-763073" to be "Ready" ...
	I1216 04:35:42.821511  475694 out.go:203] 
	W1216 04:35:42.824400  475694 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 04:35:42.824420  475694 out.go:285] * 
	W1216 04:35:42.826578  475694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:35:42.829442  475694 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 04:35:51 functional-763073 crio[5388]: time="2025-12-16T04:35:51.644592632Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=991b5227-f44c-4be8-8368-76a81108b71f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.703121678Z" level=info msg="Checking image status: minikube-local-cache-test:functional-763073" id=ce6da041-c693-4ed8-8f67-0e2dfa5f474c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.703334161Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.703408098Z" level=info msg="Image minikube-local-cache-test:functional-763073 not found" id=ce6da041-c693-4ed8-8f67-0e2dfa5f474c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.703501522Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-763073 found" id=ce6da041-c693-4ed8-8f67-0e2dfa5f474c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.729422385Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-763073" id=9c3b6678-6461-4136-9494-36a2f286b515 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.729581123Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-763073 not found" id=9c3b6678-6461-4136-9494-36a2f286b515 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.729629378Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-763073 found" id=9c3b6678-6461-4136-9494-36a2f286b515 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.751808181Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-763073" id=f9a21605-057b-4ce8-98f7-3f87460344d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.751965918Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-763073 not found" id=f9a21605-057b-4ce8-98f7-3f87460344d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.752023272Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-763073 found" id=f9a21605-057b-4ce8-98f7-3f87460344d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:53 functional-763073 crio[5388]: time="2025-12-16T04:35:53.723362934Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1254b668-94d1-4907-b41e-bfc70228cac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.05968147Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b570985a-7c53-4562-aefb-ce4eaac2ce51 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.059823084Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=b570985a-7c53-4562-aefb-ce4eaac2ce51 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.059872627Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=b570985a-7c53-4562-aefb-ce4eaac2ce51 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.588213855Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=daeb8e9e-5767-459f-8fa3-2d940dcac344 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.588363969Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=daeb8e9e-5767-459f-8fa3-2d940dcac344 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.588402181Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=daeb8e9e-5767-459f-8fa3-2d940dcac344 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.612801355Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1df86d52-c634-46b1-b725-aac31e36969b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.613140189Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1df86d52-c634-46b1-b725-aac31e36969b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.613186204Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1df86d52-c634-46b1-b725-aac31e36969b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.64335223Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a4a54ea3-fab4-4dfc-8131-481d19907593 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.643515965Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a4a54ea3-fab4-4dfc-8131-481d19907593 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.643556113Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=a4a54ea3-fab4-4dfc-8131-481d19907593 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:55 functional-763073 crio[5388]: time="2025-12-16T04:35:55.219732281Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8e58da4f-b02a-4dba-9994-86e50eea8261 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:35:56.746968    9429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:56.747750    9429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:56.749855    9429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:56.750818    9429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:56.752422    9429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	[Dec16 04:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:35:56 up  3:18,  0 user,  load average: 0.85, 0.41, 0.81
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:35:54 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:55 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 16 04:35:55 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:55 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:55 functional-763073 kubelet[9303]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:55 functional-763073 kubelet[9303]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:55 functional-763073 kubelet[9303]: E1216 04:35:55.139578    9303 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:55 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:55 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:55 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 16 04:35:55 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:55 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:55 functional-763073 kubelet[9339]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:55 functional-763073 kubelet[9339]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:55 functional-763073 kubelet[9339]: E1216 04:35:55.883579    9339 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:55 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:55 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:56 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 16 04:35:56 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:56 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:56 functional-763073 kubelet[9402]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:56 functional-763073 kubelet[9402]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:56 functional-763073 kubelet[9402]: E1216 04:35:56.644499    9402 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:56 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:56 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
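
The round_trippers lines in the log above show what minikube spends six minutes doing: issuing GET https://192.168.49.2:8441/api/v1/nodes/functional-763073 every 500ms, getting "connection refused", and retrying until the WaitNodeCondition deadline expires. A minimal client-go sketch of such a poll loop (not minikube's actual implementation; the kubeconfig path and node name here are assumptions taken from the log) looks like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same budget as the failing test: 6 minutes for the node to be Ready.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "functional-763073", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		// On errors such as "connect: connection refused" we simply retry,
		// exactly as the log above does every 500ms.
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node to be Ready:", ctx.Err())
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}

Because the apiserver on 192.168.49.2:8441 never comes up, every iteration fails the same way and the loop can only exit through the deadline, which is the "WaitNodeCondition: context deadline exceeded" that the GUEST_START error reports.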
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (334.420138ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.43s)
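
The kubelet section of the log above contains the actual root cause of the restart loop (counter 1154 through 1156): this kubelet build refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"). A hedged Go sketch of the usual way to probe the host's cgroup version, by checking the filesystem magic of /sys/fs/cgroup (a common technique, not necessarily the kubelet's exact check):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	// cgroup2fs has filesystem magic 0x63677270; anything else mounted at
	// /sys/fs/cgroup (typically tmpfs) means the host is on cgroup v1.
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2: this kubelet build can start")
	} else {
		fmt.Println("cgroup v1: this kubelet build exits with the validation error above")
	}
}

The same probe from a shell is stat -fc %T /sys/fs/cgroup, which prints cgroup2fs on a v2 host.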

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-763073 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-763073 get pods: exit status 1 (108.050397ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-763073 get pods": exit status 1
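A note on the symptom: the stderr above is kubectl failing at the TCP layer before any Kubernetes logic runs. A quick hedged way to reproduce just that check in Go (the address 192.168.49.2:8441 is taken from the error message):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A plain TCP dial distinguishes "nothing is listening" (connection
	// refused, the case here) from a firewall or timeout problem.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

An immediate "connection refused" means the container is up and routable but kube-apiserver is not listening, consistent with the kubelet never starting its static pods (see the cgroup v1 error earlier in the log).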
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
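
The harness does not usually parse this full inspect dump; it pulls single fields with Go templates, as the `cli_runner` entries later in this log show. A minimal sketch of the same lookups against the container above (the format strings are taken verbatim from the log below; the profile name `functional-763073` is specific to this run):

	# Host port published for the container's SSH port (22/tcp); mapped to 33148 in this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-763073

	# IPv4/IPv6 addresses on each attached network (192.168.49.2 here)
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' functional-763073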
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (304.275675ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
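`--format={{.Host}}` reports only the host container state, while the exit code folds in every component, which is why a `Running` host can still exit 2 when the kubelet or apiserver is unhealthy. A hedged sketch of a post-mortem loop over the other status fields (field names assumed to match the default `minikube status` table: Host, Kubelet, APIServer, Kubeconfig; `|| true` tolerates the nonzero exit the same way the helper does):

	# Print each component's state without letting a nonzero exit abort the script
	for field in Host Kubelet APIServer Kubeconfig; do
		echo "$field: $(out/minikube-linux-arm64 status -p functional-763073 --format="{{.$field}}" || true)"
	done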
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-763073 logs -n 25: (1.025793706s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-861171 image build -t localhost/my-image:functional-861171 testdata/build --alsologtostderr                                            │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format json --alsologtostderr                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format table --alsologtostderr                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls                                                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ delete         │ -p functional-861171                                                                                                                              │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ start          │ -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │                     │
	│ start          │ -p functional-763073 --alsologtostderr -v=8                                                                                                       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │                     │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:latest                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add minikube-local-cache-test:functional-763073                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache delete minikube-local-cache-test:functional-763073                                                                        │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl images                                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │                     │
	│ cache          │ functional-763073 cache reload                                                                                                                    │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ kubectl        │ functional-763073 kubectl -- --context functional-763073 get pods                                                                                 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:29:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:29:36.794313  475694 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:29:36.794434  475694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:29:36.794446  475694 out.go:374] Setting ErrFile to fd 2...
	I1216 04:29:36.794452  475694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:29:36.794700  475694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:29:36.795091  475694 out.go:368] Setting JSON to false
	I1216 04:29:36.795948  475694 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11523,"bootTime":1765847854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:29:36.796022  475694 start.go:143] virtualization:  
	I1216 04:29:36.799564  475694 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:29:36.803377  475694 notify.go:221] Checking for updates...
	I1216 04:29:36.806471  475694 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:29:36.809418  475694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:29:36.812382  475694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:36.815368  475694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:29:36.818384  475694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:29:36.821299  475694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:29:36.824780  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:36.824898  475694 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:29:36.853440  475694 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:29:36.853553  475694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:29:36.911081  475694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:29:36.901976085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:29:36.911198  475694 docker.go:319] overlay module found
	I1216 04:29:36.914378  475694 out.go:179] * Using the docker driver based on existing profile
	I1216 04:29:36.917157  475694 start.go:309] selected driver: docker
	I1216 04:29:36.917180  475694 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:36.917338  475694 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:29:36.917450  475694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:29:36.970986  475694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:29:36.961820507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:29:36.971442  475694 cni.go:84] Creating CNI manager for ""
	I1216 04:29:36.971503  475694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:29:36.971553  475694 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:36.974751  475694 out.go:179] * Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	I1216 04:29:36.977516  475694 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:29:36.980431  475694 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:29:36.983493  475694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:29:36.983530  475694 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:29:36.983585  475694 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:29:36.983595  475694 cache.go:65] Caching tarball of preloaded images
	I1216 04:29:36.983676  475694 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:29:36.983683  475694 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:29:36.983782  475694 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:29:37.009018  475694 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:29:37.009047  475694 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:29:37.009096  475694 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:29:37.009136  475694 start.go:360] acquireMachinesLock for functional-763073: {Name:mk37f96bdb0feffde12ec58bbc71256d58abc2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:29:37.009247  475694 start.go:364] duration metric: took 82.708µs to acquireMachinesLock for "functional-763073"
	I1216 04:29:37.009287  475694 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:29:37.009293  475694 fix.go:54] fixHost starting: 
	I1216 04:29:37.009582  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:37.028726  475694 fix.go:112] recreateIfNeeded on functional-763073: state=Running err=<nil>
	W1216 04:29:37.028764  475694 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:29:37.032201  475694 out.go:252] * Updating the running docker "functional-763073" container ...
	I1216 04:29:37.032251  475694 machine.go:94] provisionDockerMachine start ...
	I1216 04:29:37.032362  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.050328  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.050673  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.050689  475694 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:29:37.192783  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:29:37.192826  475694 ubuntu.go:182] provisioning hostname "functional-763073"
	I1216 04:29:37.192931  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.211313  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.211628  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.211639  475694 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-763073 && echo "functional-763073" | sudo tee /etc/hostname
	I1216 04:29:37.354192  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:29:37.354269  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.376898  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.377254  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.377278  475694 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-763073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:29:37.509279  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:29:37.509306  475694 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:29:37.509326  475694 ubuntu.go:190] setting up certificates
	I1216 04:29:37.509346  475694 provision.go:84] configureAuth start
	I1216 04:29:37.509406  475694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:29:37.527206  475694 provision.go:143] copyHostCerts
	I1216 04:29:37.527264  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:29:37.527308  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 04:29:37.527320  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:29:37.527395  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:29:37.527487  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:29:37.527509  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 04:29:37.527517  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:29:37.527545  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:29:37.527594  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:29:37.527615  475694 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 04:29:37.527622  475694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:29:37.527648  475694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:29:37.527699  475694 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.functional-763073 san=[127.0.0.1 192.168.49.2 functional-763073 localhost minikube]
	I1216 04:29:37.800879  475694 provision.go:177] copyRemoteCerts
	I1216 04:29:37.800949  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:29:37.800990  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.823288  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:37.920869  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 04:29:37.920929  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:29:37.938521  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 04:29:37.938583  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:29:37.956377  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 04:29:37.956439  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:29:37.974119  475694 provision.go:87] duration metric: took 464.750518ms to configureAuth
	I1216 04:29:37.974148  475694 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:29:37.974331  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:37.974450  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:37.991914  475694 main.go:143] libmachine: Using SSH client type: native
	I1216 04:29:37.992233  475694 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:29:37.992254  475694 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:29:38.308392  475694 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:29:38.308467  475694 machine.go:97] duration metric: took 1.27620546s to provisionDockerMachine
	I1216 04:29:38.308501  475694 start.go:293] postStartSetup for "functional-763073" (driver="docker")
	I1216 04:29:38.308543  475694 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:29:38.308636  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:29:38.308736  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.327973  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.425975  475694 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:29:38.429465  475694 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 04:29:38.429486  475694 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 04:29:38.429491  475694 command_runner.go:130] > VERSION_ID="12"
	I1216 04:29:38.429495  475694 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 04:29:38.429500  475694 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 04:29:38.429503  475694 command_runner.go:130] > ID=debian
	I1216 04:29:38.429508  475694 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 04:29:38.429575  475694 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 04:29:38.429584  475694 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 04:29:38.429642  475694 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:29:38.429664  475694 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:29:38.429675  475694 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:29:38.429740  475694 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:29:38.429824  475694 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 04:29:38.429840  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> /etc/ssl/certs/4417272.pem
	I1216 04:29:38.429918  475694 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> hosts in /etc/test/nested/copy/441727
	I1216 04:29:38.429926  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> /etc/test/nested/copy/441727/hosts
	I1216 04:29:38.429973  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/441727
	I1216 04:29:38.438164  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:29:38.456472  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts --> /etc/test/nested/copy/441727/hosts (40 bytes)
	I1216 04:29:38.474815  475694 start.go:296] duration metric: took 166.27897ms for postStartSetup
	I1216 04:29:38.474942  475694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:29:38.475008  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.493257  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.586186  475694 command_runner.go:130] > 13%
	I1216 04:29:38.586744  475694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:29:38.591214  475694 command_runner.go:130] > 169G
	I1216 04:29:38.591631  475694 fix.go:56] duration metric: took 1.582334669s for fixHost
	I1216 04:29:38.591655  475694 start.go:83] releasing machines lock for "functional-763073", held for 1.582392532s
	I1216 04:29:38.591756  475694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:29:38.610497  475694 ssh_runner.go:195] Run: cat /version.json
	I1216 04:29:38.610580  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.610804  475694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:29:38.610862  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:38.644780  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.648235  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:38.740654  475694 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765575274-22117", "minikube_version": "v1.37.0", "commit": "908107e58d7f489afb59ecef3679cbdc57b624cc"}
	I1216 04:29:38.740792  475694 ssh_runner.go:195] Run: systemctl --version
	I1216 04:29:38.835621  475694 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1216 04:29:38.838633  475694 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 04:29:38.838716  475694 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 04:29:38.838811  475694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:29:38.876422  475694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 04:29:38.880827  475694 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 04:29:38.881001  475694 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:29:38.881102  475694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:29:38.888966  475694 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:29:38.888992  475694 start.go:496] detecting cgroup driver to use...
	I1216 04:29:38.889023  475694 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:29:38.889116  475694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:29:38.904919  475694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:29:38.918230  475694 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:29:38.918296  475694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:29:38.934386  475694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:29:38.947903  475694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:29:39.064725  475694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:29:39.186461  475694 docker.go:234] disabling docker service ...
	I1216 04:29:39.186555  475694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:29:39.201259  475694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:29:39.214213  475694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:29:39.331697  475694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:29:39.468929  475694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:29:39.481743  475694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:29:39.494008  475694 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1216 04:29:39.494807  475694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:29:39.494889  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.503668  475694 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:29:39.503751  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.513027  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.521738  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.530476  475694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:29:39.538796  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.547730  475694 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.556341  475694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:39.565046  475694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:29:39.571643  475694 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 04:29:39.572565  475694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:29:39.579896  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:39.695396  475694 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 04:29:39.852818  475694 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:29:39.852930  475694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:29:39.856967  475694 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1216 04:29:39.856989  475694 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 04:29:39.856996  475694 command_runner.go:130] > Device: 0,72	Inode: 1641        Links: 1
	I1216 04:29:39.857013  475694 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:29:39.857019  475694 command_runner.go:130] > Access: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857028  475694 command_runner.go:130] > Modify: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857036  475694 command_runner.go:130] > Change: 2025-12-16 04:29:39.805035663 +0000
	I1216 04:29:39.857040  475694 command_runner.go:130] >  Birth: -
	I1216 04:29:39.857332  475694 start.go:564] Will wait 60s for crictl version
	I1216 04:29:39.857393  475694 ssh_runner.go:195] Run: which crictl
	I1216 04:29:39.860635  475694 command_runner.go:130] > /usr/local/bin/crictl
	I1216 04:29:39.860907  475694 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:29:39.883882  475694 command_runner.go:130] > Version:  0.1.0
	I1216 04:29:39.883905  475694 command_runner.go:130] > RuntimeName:  cri-o
	I1216 04:29:39.883910  475694 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1216 04:29:39.883916  475694 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 04:29:39.886266  475694 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:29:39.886355  475694 ssh_runner.go:195] Run: crio --version
	I1216 04:29:39.912976  475694 command_runner.go:130] > crio version 1.34.3
	I1216 04:29:39.913004  475694 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 04:29:39.913011  475694 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 04:29:39.913016  475694 command_runner.go:130] >    GitTreeState:   dirty
	I1216 04:29:39.913021  475694 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 04:29:39.913026  475694 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 04:29:39.913030  475694 command_runner.go:130] >    Compiler:       gc
	I1216 04:29:39.913034  475694 command_runner.go:130] >    Platform:       linux/arm64
	I1216 04:29:39.913044  475694 command_runner.go:130] >    Linkmode:       static
	I1216 04:29:39.913048  475694 command_runner.go:130] >    BuildTags:
	I1216 04:29:39.913052  475694 command_runner.go:130] >      static
	I1216 04:29:39.913055  475694 command_runner.go:130] >      netgo
	I1216 04:29:39.913059  475694 command_runner.go:130] >      osusergo
	I1216 04:29:39.913089  475694 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 04:29:39.913094  475694 command_runner.go:130] >      seccomp
	I1216 04:29:39.913097  475694 command_runner.go:130] >      apparmor
	I1216 04:29:39.913101  475694 command_runner.go:130] >      selinux
	I1216 04:29:39.913104  475694 command_runner.go:130] >    LDFlags:          unknown
	I1216 04:29:39.913108  475694 command_runner.go:130] >    SeccompEnabled:   true
	I1216 04:29:39.913112  475694 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 04:29:39.915574  475694 ssh_runner.go:195] Run: crio --version
	I1216 04:29:39.945490  475694 command_runner.go:130] > crio version 1.34.3
	I1216 04:29:39.945513  475694 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1216 04:29:39.945520  475694 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1216 04:29:39.945525  475694 command_runner.go:130] >    GitTreeState:   dirty
	I1216 04:29:39.945530  475694 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1216 04:29:39.945534  475694 command_runner.go:130] >    GoVersion:      go1.24.6
	I1216 04:29:39.945538  475694 command_runner.go:130] >    Compiler:       gc
	I1216 04:29:39.945543  475694 command_runner.go:130] >    Platform:       linux/arm64
	I1216 04:29:39.945548  475694 command_runner.go:130] >    Linkmode:       static
	I1216 04:29:39.945551  475694 command_runner.go:130] >    BuildTags:
	I1216 04:29:39.945557  475694 command_runner.go:130] >      static
	I1216 04:29:39.945561  475694 command_runner.go:130] >      netgo
	I1216 04:29:39.945587  475694 command_runner.go:130] >      osusergo
	I1216 04:29:39.945594  475694 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1216 04:29:39.945598  475694 command_runner.go:130] >      seccomp
	I1216 04:29:39.945601  475694 command_runner.go:130] >      apparmor
	I1216 04:29:39.945607  475694 command_runner.go:130] >      selinux
	I1216 04:29:39.945617  475694 command_runner.go:130] >    LDFlags:          unknown
	I1216 04:29:39.945623  475694 command_runner.go:130] >    SeccompEnabled:   true
	I1216 04:29:39.945639  475694 command_runner.go:130] >    AppArmorEnabled:  false
	I1216 04:29:39.952832  475694 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 04:29:39.955738  475694 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:29:39.972578  475694 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:29:39.976813  475694 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1216 04:29:39.976940  475694 kubeadm.go:884] updating cluster {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:29:39.977085  475694 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:29:39.977157  475694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:29:40.026676  475694 command_runner.go:130] > {
	I1216 04:29:40.026700  475694 command_runner.go:130] >   "images":  [
	I1216 04:29:40.026707  475694 command_runner.go:130] >     {
	I1216 04:29:40.026715  475694 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 04:29:40.026721  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026727  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 04:29:40.026731  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026736  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026745  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 04:29:40.026758  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 04:29:40.026762  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026770  475694 command_runner.go:130] >       "size":  "111333938",
	I1216 04:29:40.026775  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.026789  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026796  475694 command_runner.go:130] >     },
	I1216 04:29:40.026800  475694 command_runner.go:130] >     {
	I1216 04:29:40.026807  475694 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 04:29:40.026815  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026820  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 04:29:40.026827  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026831  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026843  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 04:29:40.026852  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 04:29:40.026859  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026863  475694 command_runner.go:130] >       "size":  "29037500",
	I1216 04:29:40.026867  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.026879  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026883  475694 command_runner.go:130] >     },
	I1216 04:29:40.026895  475694 command_runner.go:130] >     {
	I1216 04:29:40.026906  475694 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 04:29:40.026917  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.026927  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 04:29:40.026930  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026934  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.026942  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 04:29:40.026954  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 04:29:40.026962  475694 command_runner.go:130] >       ],
	I1216 04:29:40.026966  475694 command_runner.go:130] >       "size":  "74491780",
	I1216 04:29:40.026974  475694 command_runner.go:130] >       "username":  "nonroot",
	I1216 04:29:40.026979  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.026985  475694 command_runner.go:130] >     },
	I1216 04:29:40.026988  475694 command_runner.go:130] >     {
	I1216 04:29:40.026995  475694 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 04:29:40.027002  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027012  475694 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 04:29:40.027019  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027023  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027031  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 04:29:40.027041  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 04:29:40.027047  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027052  475694 command_runner.go:130] >       "size":  "60857170",
	I1216 04:29:40.027058  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027063  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027070  475694 command_runner.go:130] >       },
	I1216 04:29:40.027084  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027092  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027096  475694 command_runner.go:130] >     },
	I1216 04:29:40.027100  475694 command_runner.go:130] >     {
	I1216 04:29:40.027106  475694 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 04:29:40.027114  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027119  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 04:29:40.027129  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027138  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027146  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 04:29:40.027157  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 04:29:40.027161  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027168  475694 command_runner.go:130] >       "size":  "84949999",
	I1216 04:29:40.027171  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027175  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027183  475694 command_runner.go:130] >       },
	I1216 04:29:40.027187  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027192  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027200  475694 command_runner.go:130] >     },
	I1216 04:29:40.027203  475694 command_runner.go:130] >     {
	I1216 04:29:40.027214  475694 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 04:29:40.027229  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027235  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 04:29:40.027241  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027245  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027254  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 04:29:40.027266  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 04:29:40.027269  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027278  475694 command_runner.go:130] >       "size":  "72170325",
	I1216 04:29:40.027281  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027288  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027292  475694 command_runner.go:130] >       },
	I1216 04:29:40.027300  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027305  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027311  475694 command_runner.go:130] >     },
	I1216 04:29:40.027314  475694 command_runner.go:130] >     {
	I1216 04:29:40.027320  475694 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 04:29:40.027324  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027333  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 04:29:40.027337  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027345  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027357  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 04:29:40.027366  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 04:29:40.027372  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027376  475694 command_runner.go:130] >       "size":  "74106775",
	I1216 04:29:40.027384  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027389  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027395  475694 command_runner.go:130] >     },
	I1216 04:29:40.027399  475694 command_runner.go:130] >     {
	I1216 04:29:40.027405  475694 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 04:29:40.027409  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027423  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 04:29:40.027430  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027434  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027442  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 04:29:40.027466  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 04:29:40.027473  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027478  475694 command_runner.go:130] >       "size":  "49822549",
	I1216 04:29:40.027485  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027489  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.027492  475694 command_runner.go:130] >       },
	I1216 04:29:40.027498  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027507  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.027514  475694 command_runner.go:130] >     },
	I1216 04:29:40.027517  475694 command_runner.go:130] >     {
	I1216 04:29:40.027524  475694 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 04:29:40.027531  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.027536  475694 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.027542  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027547  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.027557  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 04:29:40.027568  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 04:29:40.027573  475694 command_runner.go:130] >       ],
	I1216 04:29:40.027586  475694 command_runner.go:130] >       "size":  "519884",
	I1216 04:29:40.027593  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.027598  475694 command_runner.go:130] >         "value":  "65535"
	I1216 04:29:40.027601  475694 command_runner.go:130] >       },
	I1216 04:29:40.027610  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.027614  475694 command_runner.go:130] >       "pinned":  true
	I1216 04:29:40.027620  475694 command_runner.go:130] >     }
	I1216 04:29:40.027623  475694 command_runner.go:130] >   ]
	I1216 04:29:40.027626  475694 command_runner.go:130] > }
	I1216 04:29:40.029894  475694 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:29:40.029927  475694 crio.go:433] Images already preloaded, skipping extraction
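The "crictl images --output json" payload above maps directly onto a pair of Go structs. A minimal sketch of decoding it (field set inferred from the logged output, not minikube's crio.go types; note that "size" and the uid "value" arrive as JSON strings, not numbers):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type crictlImage struct {
    	ID          string   `json:"id"`
    	RepoTags    []string `json:"repoTags"`
    	RepoDigests []string `json:"repoDigests"`
    	Size        string   `json:"size"` // a string in the payload, e.g. "111333938"
    	Username    string   `json:"username"`
    	Pinned      bool     `json:"pinned"`
    }

    type crictlImageList struct {
    	Images []crictlImage `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list crictlImageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	for _, img := range list.Images {
    		fmt.Println(img.RepoTags, img.Size, img.Pinned)
    	}
    }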
	I1216 04:29:40.029987  475694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:29:40.055653  475694 command_runner.go:130] > {
	I1216 04:29:40.055673  475694 command_runner.go:130] >   "images":  [
	I1216 04:29:40.055678  475694 command_runner.go:130] >     {
	I1216 04:29:40.055687  475694 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1216 04:29:40.055692  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055697  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1216 04:29:40.055701  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055705  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055715  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1216 04:29:40.055724  475694 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1216 04:29:40.055728  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055732  475694 command_runner.go:130] >       "size":  "111333938",
	I1216 04:29:40.055736  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055740  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055744  475694 command_runner.go:130] >     },
	I1216 04:29:40.055747  475694 command_runner.go:130] >     {
	I1216 04:29:40.055753  475694 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1216 04:29:40.055757  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055762  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1216 04:29:40.055765  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055769  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055787  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1216 04:29:40.055795  475694 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1216 04:29:40.055798  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055802  475694 command_runner.go:130] >       "size":  "29037500",
	I1216 04:29:40.055806  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055817  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055820  475694 command_runner.go:130] >     },
	I1216 04:29:40.055824  475694 command_runner.go:130] >     {
	I1216 04:29:40.055830  475694 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1216 04:29:40.055833  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055838  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1216 04:29:40.055841  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055845  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055854  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1216 04:29:40.055862  475694 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1216 04:29:40.055865  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055869  475694 command_runner.go:130] >       "size":  "74491780",
	I1216 04:29:40.055873  475694 command_runner.go:130] >       "username":  "nonroot",
	I1216 04:29:40.055876  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055879  475694 command_runner.go:130] >     },
	I1216 04:29:40.055882  475694 command_runner.go:130] >     {
	I1216 04:29:40.055891  475694 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1216 04:29:40.055894  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055899  475694 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1216 04:29:40.055904  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055908  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055915  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1216 04:29:40.055923  475694 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1216 04:29:40.055926  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055929  475694 command_runner.go:130] >       "size":  "60857170",
	I1216 04:29:40.055933  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.055937  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.055940  475694 command_runner.go:130] >       },
	I1216 04:29:40.055952  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.055956  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.055959  475694 command_runner.go:130] >     },
	I1216 04:29:40.055961  475694 command_runner.go:130] >     {
	I1216 04:29:40.055968  475694 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1216 04:29:40.055971  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.055976  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1216 04:29:40.055979  475694 command_runner.go:130] >       ],
	I1216 04:29:40.055983  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.055990  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1216 04:29:40.055998  475694 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1216 04:29:40.056001  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056005  475694 command_runner.go:130] >       "size":  "84949999",
	I1216 04:29:40.056008  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056012  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056015  475694 command_runner.go:130] >       },
	I1216 04:29:40.056018  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056022  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056024  475694 command_runner.go:130] >     },
	I1216 04:29:40.056027  475694 command_runner.go:130] >     {
	I1216 04:29:40.056033  475694 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1216 04:29:40.056037  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056043  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1216 04:29:40.056045  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056049  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056057  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1216 04:29:40.056065  475694 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1216 04:29:40.056068  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056072  475694 command_runner.go:130] >       "size":  "72170325",
	I1216 04:29:40.056075  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056079  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056082  475694 command_runner.go:130] >       },
	I1216 04:29:40.056085  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056092  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056096  475694 command_runner.go:130] >     },
	I1216 04:29:40.056099  475694 command_runner.go:130] >     {
	I1216 04:29:40.056106  475694 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1216 04:29:40.056110  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056115  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1216 04:29:40.056118  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056122  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056130  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1216 04:29:40.056137  475694 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1216 04:29:40.056141  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056144  475694 command_runner.go:130] >       "size":  "74106775",
	I1216 04:29:40.056148  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056152  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056155  475694 command_runner.go:130] >     },
	I1216 04:29:40.056158  475694 command_runner.go:130] >     {
	I1216 04:29:40.056164  475694 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1216 04:29:40.056168  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056173  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1216 04:29:40.056176  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056180  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056188  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1216 04:29:40.056204  475694 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1216 04:29:40.056207  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056211  475694 command_runner.go:130] >       "size":  "49822549",
	I1216 04:29:40.056215  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056218  475694 command_runner.go:130] >         "value":  "0"
	I1216 04:29:40.056221  475694 command_runner.go:130] >       },
	I1216 04:29:40.056225  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056228  475694 command_runner.go:130] >       "pinned":  false
	I1216 04:29:40.056231  475694 command_runner.go:130] >     },
	I1216 04:29:40.056233  475694 command_runner.go:130] >     {
	I1216 04:29:40.056240  475694 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1216 04:29:40.056247  475694 command_runner.go:130] >       "repoTags":  [
	I1216 04:29:40.056251  475694 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.056255  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056259  475694 command_runner.go:130] >       "repoDigests":  [
	I1216 04:29:40.056266  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1216 04:29:40.056278  475694 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1216 04:29:40.056281  475694 command_runner.go:130] >       ],
	I1216 04:29:40.056285  475694 command_runner.go:130] >       "size":  "519884",
	I1216 04:29:40.056289  475694 command_runner.go:130] >       "uid":  {
	I1216 04:29:40.056293  475694 command_runner.go:130] >         "value":  "65535"
	I1216 04:29:40.056296  475694 command_runner.go:130] >       },
	I1216 04:29:40.056299  475694 command_runner.go:130] >       "username":  "",
	I1216 04:29:40.056303  475694 command_runner.go:130] >       "pinned":  true
	I1216 04:29:40.056305  475694 command_runner.go:130] >     }
	I1216 04:29:40.056308  475694 command_runner.go:130] >   ]
	I1216 04:29:40.056312  475694 command_runner.go:130] > }
	I1216 04:29:40.057842  475694 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:29:40.057866  475694 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:29:40.057874  475694 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 04:29:40.058028  475694 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-763073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
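The empty ExecStart= in the unit above is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service before the next line redefines it; without the reset, systemd would reject a second ExecStart for a simple service. A minimal sketch of rendering such an override with text/template (field names are illustrative, not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    const unitTmpl = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unitTmpl))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Runtime": "crio",
    		"Kubelet": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
    		"Node":    "functional-763073",
    		"IP":      "192.168.49.2",
    	})
    }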
	I1216 04:29:40.058117  475694 ssh_runner.go:195] Run: crio config
	I1216 04:29:40.108801  475694 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1216 04:29:40.108825  475694 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1216 04:29:40.108833  475694 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1216 04:29:40.108837  475694 command_runner.go:130] > #
	I1216 04:29:40.108844  475694 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1216 04:29:40.108850  475694 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1216 04:29:40.108857  475694 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1216 04:29:40.108874  475694 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1216 04:29:40.108891  475694 command_runner.go:130] > # reload'.
	I1216 04:29:40.108898  475694 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1216 04:29:40.108905  475694 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1216 04:29:40.108915  475694 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1216 04:29:40.108922  475694 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1216 04:29:40.108925  475694 command_runner.go:130] > [crio]
	I1216 04:29:40.108932  475694 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1216 04:29:40.108939  475694 command_runner.go:130] > # containers images, in this directory.
	I1216 04:29:40.109485  475694 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1216 04:29:40.109505  475694 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1216 04:29:40.110050  475694 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1216 04:29:40.110069  475694 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1216 04:29:40.110418  475694 command_runner.go:130] > # imagestore = ""
	I1216 04:29:40.110434  475694 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1216 04:29:40.110442  475694 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1216 04:29:40.110623  475694 command_runner.go:130] > # storage_driver = "overlay"
	I1216 04:29:40.110671  475694 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1216 04:29:40.110692  475694 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1216 04:29:40.110809  475694 command_runner.go:130] > # storage_option = [
	I1216 04:29:40.110816  475694 command_runner.go:130] > # ]
	I1216 04:29:40.110824  475694 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1216 04:29:40.110831  475694 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1216 04:29:40.110973  475694 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1216 04:29:40.110983  475694 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1216 04:29:40.111015  475694 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1216 04:29:40.111021  475694 command_runner.go:130] > # always happen on a node reboot
	I1216 04:29:40.111194  475694 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1216 04:29:40.111214  475694 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1216 04:29:40.111221  475694 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1216 04:29:40.111260  475694 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1216 04:29:40.111402  475694 command_runner.go:130] > # version_file_persist = ""
	I1216 04:29:40.111414  475694 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1216 04:29:40.111423  475694 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1216 04:29:40.111428  475694 command_runner.go:130] > # internal_wipe = true
	I1216 04:29:40.111436  475694 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1216 04:29:40.111471  475694 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1216 04:29:40.111604  475694 command_runner.go:130] > # internal_repair = true
	I1216 04:29:40.111614  475694 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1216 04:29:40.111621  475694 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1216 04:29:40.111626  475694 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1216 04:29:40.111750  475694 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1216 04:29:40.111761  475694 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1216 04:29:40.111764  475694 command_runner.go:130] > [crio.api]
	I1216 04:29:40.111770  475694 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1216 04:29:40.111973  475694 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1216 04:29:40.111983  475694 command_runner.go:130] > # IP address on which the stream server will listen.
	I1216 04:29:40.112123  475694 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1216 04:29:40.112134  475694 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1216 04:29:40.112139  475694 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1216 04:29:40.112334  475694 command_runner.go:130] > # stream_port = "0"
	I1216 04:29:40.112344  475694 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1216 04:29:40.112496  475694 command_runner.go:130] > # stream_enable_tls = false
	I1216 04:29:40.112506  475694 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1216 04:29:40.112646  475694 command_runner.go:130] > # stream_idle_timeout = ""
	I1216 04:29:40.112658  475694 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1216 04:29:40.112664  475694 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1216 04:29:40.112790  475694 command_runner.go:130] > # stream_tls_cert = ""
	I1216 04:29:40.112800  475694 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1216 04:29:40.112806  475694 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1216 04:29:40.112930  475694 command_runner.go:130] > # stream_tls_key = ""
	I1216 04:29:40.112940  475694 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1216 04:29:40.112947  475694 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1216 04:29:40.112956  475694 command_runner.go:130] > # automatically pick up the changes.
	I1216 04:29:40.113120  475694 command_runner.go:130] > # stream_tls_ca = ""
	I1216 04:29:40.113148  475694 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 04:29:40.113407  475694 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1216 04:29:40.113455  475694 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1216 04:29:40.113595  475694 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1216 04:29:40.113624  475694 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1216 04:29:40.113657  475694 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1216 04:29:40.113680  475694 command_runner.go:130] > [crio.runtime]
	I1216 04:29:40.113702  475694 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1216 04:29:40.113736  475694 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1216 04:29:40.113757  475694 command_runner.go:130] > # "nofile=1024:2048"
	I1216 04:29:40.113777  475694 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1216 04:29:40.113795  475694 command_runner.go:130] > # default_ulimits = [
	I1216 04:29:40.113822  475694 command_runner.go:130] > # ]
	I1216 04:29:40.113845  475694 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1216 04:29:40.113998  475694 command_runner.go:130] > # no_pivot = false
	I1216 04:29:40.114026  475694 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1216 04:29:40.114058  475694 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1216 04:29:40.114076  475694 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1216 04:29:40.114109  475694 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1216 04:29:40.114138  475694 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1216 04:29:40.114159  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 04:29:40.114189  475694 command_runner.go:130] > # conmon = ""
	I1216 04:29:40.114211  475694 command_runner.go:130] > # Cgroup setting for conmon
	I1216 04:29:40.114233  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1216 04:29:40.114382  475694 command_runner.go:130] > conmon_cgroup = "pod"
	I1216 04:29:40.114414  475694 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1216 04:29:40.114449  475694 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1216 04:29:40.114469  475694 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1216 04:29:40.114514  475694 command_runner.go:130] > # conmon_env = [
	I1216 04:29:40.114538  475694 command_runner.go:130] > # ]
	I1216 04:29:40.114560  475694 command_runner.go:130] > # Additional environment variables to set for all the
	I1216 04:29:40.114591  475694 command_runner.go:130] > # containers. These are overridden if set in the
	I1216 04:29:40.114614  475694 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1216 04:29:40.114632  475694 command_runner.go:130] > # default_env = [
	I1216 04:29:40.114649  475694 command_runner.go:130] > # ]
	I1216 04:29:40.114679  475694 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1216 04:29:40.114706  475694 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1216 04:29:40.114884  475694 command_runner.go:130] > # selinux = false
	I1216 04:29:40.114896  475694 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1216 04:29:40.114903  475694 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1216 04:29:40.114909  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.114913  475694 command_runner.go:130] > # seccomp_profile = ""
	I1216 04:29:40.114950  475694 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1216 04:29:40.114969  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.114984  475694 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1216 04:29:40.115020  475694 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1216 04:29:40.115046  475694 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1216 04:29:40.115055  475694 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1216 04:29:40.115062  475694 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1216 04:29:40.115067  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.115072  475694 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1216 04:29:40.115077  475694 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1216 04:29:40.115116  475694 command_runner.go:130] > # the cgroup blockio controller.
	I1216 04:29:40.115133  475694 command_runner.go:130] > # blockio_config_file = ""
	I1216 04:29:40.115175  475694 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1216 04:29:40.115196  475694 command_runner.go:130] > # blockio parameters.
	I1216 04:29:40.115214  475694 command_runner.go:130] > # blockio_reload = false
	I1216 04:29:40.115235  475694 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1216 04:29:40.115262  475694 command_runner.go:130] > # irqbalance daemon.
	I1216 04:29:40.115417  475694 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1216 04:29:40.115505  475694 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1216 04:29:40.115615  475694 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1216 04:29:40.115655  475694 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1216 04:29:40.115678  475694 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1216 04:29:40.115698  475694 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1216 04:29:40.115716  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.115745  475694 command_runner.go:130] > # rdt_config_file = ""
	I1216 04:29:40.115769  475694 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1216 04:29:40.115788  475694 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1216 04:29:40.115822  475694 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1216 04:29:40.115844  475694 command_runner.go:130] > # separate_pull_cgroup = ""
	I1216 04:29:40.115864  475694 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1216 04:29:40.115884  475694 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1216 04:29:40.115919  475694 command_runner.go:130] > # will be added.
	I1216 04:29:40.115936  475694 command_runner.go:130] > # default_capabilities = [
	I1216 04:29:40.115952  475694 command_runner.go:130] > # 	"CHOWN",
	I1216 04:29:40.115983  475694 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1216 04:29:40.116006  475694 command_runner.go:130] > # 	"FSETID",
	I1216 04:29:40.116024  475694 command_runner.go:130] > # 	"FOWNER",
	I1216 04:29:40.116040  475694 command_runner.go:130] > # 	"SETGID",
	I1216 04:29:40.116070  475694 command_runner.go:130] > # 	"SETUID",
	I1216 04:29:40.116112  475694 command_runner.go:130] > # 	"SETPCAP",
	I1216 04:29:40.116150  475694 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1216 04:29:40.116170  475694 command_runner.go:130] > # 	"KILL",
	I1216 04:29:40.116187  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116209  475694 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1216 04:29:40.116243  475694 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1216 04:29:40.116264  475694 command_runner.go:130] > # add_inheritable_capabilities = false
	I1216 04:29:40.116284  475694 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1216 04:29:40.116316  475694 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 04:29:40.116336  475694 command_runner.go:130] > default_sysctls = [
	I1216 04:29:40.116352  475694 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1216 04:29:40.116370  475694 command_runner.go:130] > ]
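Nearly every line that "crio config" prints is a commented-out default; in the portion of the dump shown so far, only conmon_cgroup, cgroup_manager, and default_sysctls appear uncommented, i.e. set explicitly for this cluster. A minimal sketch that filters a dump piped on stdin down to section headers and those explicit settings:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Usage sketch: crio config | go run filter.go
    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue // skip blanks and commented defaults
    		}
    		fmt.Println(line) // e.g. [crio.runtime], cgroup_manager = "cgroupfs"
    	}
    }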
	I1216 04:29:40.116402  475694 command_runner.go:130] > # List of devices on the host that a
	I1216 04:29:40.116430  475694 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1216 04:29:40.116449  475694 command_runner.go:130] > # allowed_devices = [
	I1216 04:29:40.116482  475694 command_runner.go:130] > # 	"/dev/fuse",
	I1216 04:29:40.116502  475694 command_runner.go:130] > # 	"/dev/net/tun",
	I1216 04:29:40.116519  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116549  475694 command_runner.go:130] > # List of additional devices, specified as
	I1216 04:29:40.116842  475694 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1216 04:29:40.116898  475694 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1216 04:29:40.116921  475694 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1216 04:29:40.116950  475694 command_runner.go:130] > # additional_devices = [
	I1216 04:29:40.116977  475694 command_runner.go:130] > # ]
	I1216 04:29:40.116996  475694 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1216 04:29:40.117028  475694 command_runner.go:130] > # cdi_spec_dirs = [
	I1216 04:29:40.117054  475694 command_runner.go:130] > # 	"/etc/cdi",
	I1216 04:29:40.117101  475694 command_runner.go:130] > # 	"/var/run/cdi",
	I1216 04:29:40.117118  475694 command_runner.go:130] > # ]
	I1216 04:29:40.117139  475694 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1216 04:29:40.117174  475694 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1216 04:29:40.117193  475694 command_runner.go:130] > # Defaults to false.
	I1216 04:29:40.117222  475694 command_runner.go:130] > # device_ownership_from_security_context = false
	I1216 04:29:40.117264  475694 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1216 04:29:40.117284  475694 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1216 04:29:40.117301  475694 command_runner.go:130] > # hooks_dir = [
	I1216 04:29:40.117338  475694 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1216 04:29:40.117357  475694 command_runner.go:130] > # ]
	I1216 04:29:40.117377  475694 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1216 04:29:40.117412  475694 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1216 04:29:40.117421  475694 command_runner.go:130] > # its default mounts from the following two files:
	I1216 04:29:40.117425  475694 command_runner.go:130] > #
	I1216 04:29:40.117432  475694 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1216 04:29:40.117438  475694 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1216 04:29:40.117444  475694 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1216 04:29:40.117447  475694 command_runner.go:130] > #
	I1216 04:29:40.117454  475694 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1216 04:29:40.117461  475694 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1216 04:29:40.117467  475694 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1216 04:29:40.117517  475694 command_runner.go:130] > #      only add mounts it finds in this file.
	I1216 04:29:40.117534  475694 command_runner.go:130] > #
	I1216 04:29:40.117567  475694 command_runner.go:130] > # default_mounts_file = ""
	I1216 04:29:40.117599  475694 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1216 04:29:40.117644  475694 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1216 04:29:40.117670  475694 command_runner.go:130] > # pids_limit = -1
	I1216 04:29:40.117691  475694 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1216 04:29:40.117725  475694 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1216 04:29:40.117753  475694 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1216 04:29:40.117773  475694 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1216 04:29:40.117806  475694 command_runner.go:130] > # log_size_max = -1
	I1216 04:29:40.117830  475694 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1216 04:29:40.117850  475694 command_runner.go:130] > # log_to_journald = false
	I1216 04:29:40.117889  475694 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1216 04:29:40.117908  475694 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1216 04:29:40.117927  475694 command_runner.go:130] > # Path to directory for container attach sockets.
	I1216 04:29:40.117963  475694 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1216 04:29:40.117992  475694 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1216 04:29:40.118011  475694 command_runner.go:130] > # bind_mount_prefix = ""
	I1216 04:29:40.118045  475694 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1216 04:29:40.118064  475694 command_runner.go:130] > # read_only = false
	I1216 04:29:40.118085  475694 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1216 04:29:40.118118  475694 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1216 04:29:40.118145  475694 command_runner.go:130] > # live configuration reload.
	I1216 04:29:40.118163  475694 command_runner.go:130] > # log_level = "info"
	I1216 04:29:40.118200  475694 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1216 04:29:40.118229  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.118246  475694 command_runner.go:130] > # log_filter = ""
	I1216 04:29:40.118284  475694 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1216 04:29:40.118305  475694 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1216 04:29:40.118324  475694 command_runner.go:130] > # separated by comma.
	I1216 04:29:40.118360  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118379  475694 command_runner.go:130] > # uid_mappings = ""
	I1216 04:29:40.118400  475694 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1216 04:29:40.118433  475694 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1216 04:29:40.118453  475694 command_runner.go:130] > # separated by comma.
	I1216 04:29:40.118475  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118516  475694 command_runner.go:130] > # gid_mappings = ""
	I1216 04:29:40.118547  475694 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1216 04:29:40.118581  475694 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 04:29:40.118608  475694 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 04:29:40.118630  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.118663  475694 command_runner.go:130] > # minimum_mappable_uid = -1
	I1216 04:29:40.118694  475694 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1216 04:29:40.118716  475694 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1216 04:29:40.118867  475694 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1216 04:29:40.119059  475694 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1216 04:29:40.119080  475694 command_runner.go:130] > # minimum_mappable_gid = -1
	I1216 04:29:40.119119  475694 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1216 04:29:40.119149  475694 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1216 04:29:40.119169  475694 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1216 04:29:40.119206  475694 command_runner.go:130] > # ctr_stop_timeout = 30
	I1216 04:29:40.119228  475694 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1216 04:29:40.119249  475694 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1216 04:29:40.119286  475694 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1216 04:29:40.119304  475694 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1216 04:29:40.119323  475694 command_runner.go:130] > # drop_infra_ctr = true
	I1216 04:29:40.119357  475694 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1216 04:29:40.119378  475694 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1216 04:29:40.119425  475694 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1216 04:29:40.119453  475694 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1216 04:29:40.119476  475694 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1216 04:29:40.119511  475694 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1216 04:29:40.119541  475694 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1216 04:29:40.119560  475694 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1216 04:29:40.119590  475694 command_runner.go:130] > # shared_cpuset = ""
	I1216 04:29:40.119612  475694 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1216 04:29:40.119632  475694 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1216 04:29:40.119663  475694 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1216 04:29:40.119695  475694 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1216 04:29:40.119739  475694 command_runner.go:130] > # pinns_path = ""
	I1216 04:29:40.119766  475694 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1216 04:29:40.119787  475694 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1216 04:29:40.119820  475694 command_runner.go:130] > # enable_criu_support = true
	I1216 04:29:40.119849  475694 command_runner.go:130] > # Enable/disable the generation of the container and
	I1216 04:29:40.119870  475694 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1216 04:29:40.119901  475694 command_runner.go:130] > # enable_pod_events = false
	I1216 04:29:40.119923  475694 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1216 04:29:40.119945  475694 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1216 04:29:40.119977  475694 command_runner.go:130] > # default_runtime = "crun"
	I1216 04:29:40.120005  475694 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1216 04:29:40.120029  475694 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1216 04:29:40.120074  475694 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1216 04:29:40.120094  475694 command_runner.go:130] > # creation as a file is not desired either.
	I1216 04:29:40.120134  475694 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1216 04:29:40.120162  475694 command_runner.go:130] > # the hostname is being managed dynamically.
	I1216 04:29:40.120182  475694 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1216 04:29:40.120216  475694 command_runner.go:130] > # ]
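	A minimal sketch using the /etc/hostname case described above:

	  [crio.runtime]
	  # Fail container creation instead of creating /etc/hostname as a directory.
	  absent_mount_sources_to_reject = [
	      "/etc/hostname",
	  ]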
	I1216 04:29:40.120248  475694 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1216 04:29:40.120270  475694 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1216 04:29:40.120320  475694 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1216 04:29:40.120347  475694 command_runner.go:130] > # Each entry in the table should follow the format:
	I1216 04:29:40.120396  475694 command_runner.go:130] > #
	I1216 04:29:40.120416  475694 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1216 04:29:40.120435  475694 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1216 04:29:40.120469  475694 command_runner.go:130] > # runtime_type = "oci"
	I1216 04:29:40.120490  475694 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1216 04:29:40.120514  475694 command_runner.go:130] > # inherit_default_runtime = false
	I1216 04:29:40.120552  475694 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1216 04:29:40.120570  475694 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1216 04:29:40.120589  475694 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1216 04:29:40.120618  475694 command_runner.go:130] > # monitor_env = []
	I1216 04:29:40.120639  475694 command_runner.go:130] > # privileged_without_host_devices = false
	I1216 04:29:40.120667  475694 command_runner.go:130] > # allowed_annotations = []
	I1216 04:29:40.120700  475694 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1216 04:29:40.120720  475694 command_runner.go:130] > # no_sync_log = false
	I1216 04:29:40.120739  475694 command_runner.go:130] > # default_annotations = {}
	I1216 04:29:40.120771  475694 command_runner.go:130] > # stream_websockets = false
	I1216 04:29:40.120795  475694 command_runner.go:130] > # seccomp_profile = ""
	I1216 04:29:40.120859  475694 command_runner.go:130] > # Where:
	I1216 04:29:40.120892  475694 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1216 04:29:40.120926  475694 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1216 04:29:40.120956  475694 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1216 04:29:40.120976  475694 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1216 04:29:40.121008  475694 command_runner.go:130] > #   in $PATH.
	I1216 04:29:40.121038  475694 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1216 04:29:40.121057  475694 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1216 04:29:40.121115  475694 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1216 04:29:40.121133  475694 command_runner.go:130] > #   state.
	I1216 04:29:40.121155  475694 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1216 04:29:40.121189  475694 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1216 04:29:40.121228  475694 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1216 04:29:40.121250  475694 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1216 04:29:40.121270  475694 command_runner.go:130] > #   the values from the default runtime on load time.
	I1216 04:29:40.121300  475694 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1216 04:29:40.121328  475694 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1216 04:29:40.121349  475694 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1216 04:29:40.121370  475694 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1216 04:29:40.121404  475694 command_runner.go:130] > #   The currently recognized values are:
	I1216 04:29:40.121434  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1216 04:29:40.121457  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1216 04:29:40.121484  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1216 04:29:40.121518  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1216 04:29:40.121541  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1216 04:29:40.121564  475694 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1216 04:29:40.121592  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1216 04:29:40.121620  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1216 04:29:40.121640  475694 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1216 04:29:40.121671  475694 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1216 04:29:40.121692  475694 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1216 04:29:40.121712  475694 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1216 04:29:40.121747  475694 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1216 04:29:40.121775  475694 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1216 04:29:40.121796  475694 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1216 04:29:40.121818  475694 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1216 04:29:40.121849  475694 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1216 04:29:40.121873  475694 command_runner.go:130] > #   deprecated option "conmon".
	I1216 04:29:40.121896  475694 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1216 04:29:40.121916  475694 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1216 04:29:40.121945  475694 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1216 04:29:40.121969  475694 command_runner.go:130] > #   should be moved to the container's cgroup
	I1216 04:29:40.121989  475694 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1216 04:29:40.122009  475694 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1216 04:29:40.122039  475694 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1216 04:29:40.122065  475694 command_runner.go:130] > #   conmon-rs by using (see the sketch after this list):
	I1216 04:29:40.122085  475694 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1216 04:29:40.122108  475694 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1216 04:29:40.122138  475694 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1216 04:29:40.122166  475694 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1216 04:29:40.122184  475694 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1216 04:29:40.122204  475694 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1216 04:29:40.122236  475694 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1216 04:29:40.122262  475694 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1216 04:29:40.122285  475694 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1216 04:29:40.122332  475694 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1216 04:29:40.122360  475694 command_runner.go:130] > #   when a machine crash happens.
	I1216 04:29:40.122382  475694 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1216 04:29:40.122406  475694 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1216 04:29:40.122443  475694 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1216 04:29:40.122473  475694 command_runner.go:130] > #   seccomp profile for the runtime.
	I1216 04:29:40.122495  475694 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1216 04:29:40.122537  475694 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
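	Tying the legend together, a sketch of the conmon-rs monitor_env settings described above (the handler name and output path are hypothetical):

	  [crio.runtime.runtimes.example-handler]
	  monitor_env = [
	      "LOG_DRIVER=systemd",
	      "HEAPTRACK_OUTPUT_PATH=/tmp/heaptrack",
	  ]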
	I1216 04:29:40.122553  475694 command_runner.go:130] > #
	I1216 04:29:40.122572  475694 command_runner.go:130] > # Using the seccomp notifier feature:
	I1216 04:29:40.122589  475694 command_runner.go:130] > #
	I1216 04:29:40.122624  475694 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1216 04:29:40.122646  475694 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1216 04:29:40.122662  475694 command_runner.go:130] > #
	I1216 04:29:40.122693  475694 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1216 04:29:40.122721  475694 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1216 04:29:40.122737  475694 command_runner.go:130] > #
	I1216 04:29:40.122758  475694 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1216 04:29:40.122777  475694 command_runner.go:130] > # feature.
	I1216 04:29:40.122810  475694 command_runner.go:130] > #
	I1216 04:29:40.122842  475694 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1216 04:29:40.122863  475694 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1216 04:29:40.122893  475694 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1216 04:29:40.122913  475694 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1216 04:29:40.122933  475694 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1216 04:29:40.122960  475694 command_runner.go:130] > #
	I1216 04:29:40.122986  475694 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1216 04:29:40.123006  475694 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1216 04:29:40.123023  475694 command_runner.go:130] > #
	I1216 04:29:40.123043  475694 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1216 04:29:40.123079  475694 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1216 04:29:40.123096  475694 command_runner.go:130] > #
	I1216 04:29:40.123117  475694 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1216 04:29:40.123147  475694 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1216 04:29:40.123171  475694 command_runner.go:130] > # limitation.
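	A sketch of a runtime handler that permits the notifier annotation (the handler name is hypothetical; the handlers actually configured in this run follow below):

	  [crio.runtime.runtimes.runc-notify]
	  runtime_path = "/usr/libexec/crio/runc"
	  allowed_annotations = [
	      "io.kubernetes.cri-o.seccompNotifierAction",
	  ]
	  # A pod would then opt in with the annotation
	  # io.kubernetes.cri-o.seccompNotifierAction=stop and restartPolicy: Never.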
	I1216 04:29:40.123187  475694 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1216 04:29:40.123204  475694 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1216 04:29:40.123225  475694 command_runner.go:130] > runtime_type = ""
	I1216 04:29:40.123264  475694 command_runner.go:130] > runtime_root = "/run/crun"
	I1216 04:29:40.123284  475694 command_runner.go:130] > inherit_default_runtime = false
	I1216 04:29:40.123302  475694 command_runner.go:130] > runtime_config_path = ""
	I1216 04:29:40.123331  475694 command_runner.go:130] > container_min_memory = ""
	I1216 04:29:40.123357  475694 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 04:29:40.123375  475694 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 04:29:40.123394  475694 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 04:29:40.123413  475694 command_runner.go:130] > allowed_annotations = [
	I1216 04:29:40.123445  475694 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1216 04:29:40.123463  475694 command_runner.go:130] > ]
	I1216 04:29:40.123482  475694 command_runner.go:130] > privileged_without_host_devices = false
	I1216 04:29:40.123501  475694 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1216 04:29:40.123534  475694 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1216 04:29:40.123552  475694 command_runner.go:130] > runtime_type = ""
	I1216 04:29:40.123570  475694 command_runner.go:130] > runtime_root = "/run/runc"
	I1216 04:29:40.123589  475694 command_runner.go:130] > inherit_default_runtime = false
	I1216 04:29:40.123625  475694 command_runner.go:130] > runtime_config_path = ""
	I1216 04:29:40.123644  475694 command_runner.go:130] > container_min_memory = ""
	I1216 04:29:40.123670  475694 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1216 04:29:40.123707  475694 command_runner.go:130] > monitor_cgroup = "pod"
	I1216 04:29:40.123742  475694 command_runner.go:130] > monitor_exec_cgroup = ""
	I1216 04:29:40.123785  475694 command_runner.go:130] > privileged_without_host_devices = false
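	For comparison with the table format explained earlier, a hypothetical VM-type handler entry might look like this (binary and config paths are illustrative, e.g. for Kata Containers):

	  [crio.runtime.runtimes.kata]
	  runtime_path = "/usr/bin/containerd-shim-kata-v2"
	  runtime_type = "vm"
	  # runtime_config_path is only valid for the "vm" runtime_type.
	  runtime_config_path = "/etc/kata-containers/configuration.toml"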
	I1216 04:29:40.123815  475694 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1216 04:29:40.123837  475694 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1216 04:29:40.123859  475694 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1216 04:29:40.123892  475694 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1216 04:29:40.123918  475694 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1216 04:29:40.123943  475694 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1216 04:29:40.123978  475694 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1216 04:29:40.123998  475694 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1216 04:29:40.124022  475694 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1216 04:29:40.124054  475694 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1216 04:29:40.124075  475694 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1216 04:29:40.124108  475694 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1216 04:29:40.124142  475694 command_runner.go:130] > # Example:
	I1216 04:29:40.124163  475694 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1216 04:29:40.124183  475694 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1216 04:29:40.124217  475694 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1216 04:29:40.124245  475694 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1216 04:29:40.124262  475694 command_runner.go:130] > # cpuset = "0-1"
	I1216 04:29:40.124279  475694 command_runner.go:130] > # cpushares = "5"
	I1216 04:29:40.124296  475694 command_runner.go:130] > # cpuquota = "1000"
	I1216 04:29:40.124329  475694 command_runner.go:130] > # cpuperiod = "100000"
	I1216 04:29:40.124347  475694 command_runner.go:130] > # cpulimit = "35"
	I1216 04:29:40.124367  475694 command_runner.go:130] > # Where:
	I1216 04:29:40.124385  475694 command_runner.go:130] > # The workload name is workload-type.
	I1216 04:29:40.124421  475694 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1216 04:29:40.124440  475694 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1216 04:29:40.124460  475694 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1216 04:29:40.124492  475694 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1216 04:29:40.124517  475694 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1216 04:29:40.124536  475694 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1216 04:29:40.124556  475694 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1216 04:29:40.124575  475694 command_runner.go:130] > # Default value is set to true
	I1216 04:29:40.124610  475694 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1216 04:29:40.124630  475694 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1216 04:29:40.124649  475694 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1216 04:29:40.124667  475694 command_runner.go:130] > # Default value is set to 'false'
	I1216 04:29:40.124699  475694 command_runner.go:130] > # disable_hostport_mapping = false
	I1216 04:29:40.124718  475694 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1216 04:29:40.124741  475694 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1216 04:29:40.124768  475694 command_runner.go:130] > # timezone = ""
	I1216 04:29:40.124795  475694 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1216 04:29:40.124810  475694 command_runner.go:130] > #
	I1216 04:29:40.124829  475694 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1216 04:29:40.124850  475694 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1216 04:29:40.124892  475694 command_runner.go:130] > [crio.image]
	I1216 04:29:40.124912  475694 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1216 04:29:40.124930  475694 command_runner.go:130] > # default_transport = "docker://"
	I1216 04:29:40.124959  475694 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1216 04:29:40.125019  475694 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1216 04:29:40.125026  475694 command_runner.go:130] > # global_auth_file = ""
	I1216 04:29:40.125031  475694 command_runner.go:130] > # The image used to instantiate infra containers.
	I1216 04:29:40.125036  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.125041  475694 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1216 04:29:40.125093  475694 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1216 04:29:40.125106  475694 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1216 04:29:40.125111  475694 command_runner.go:130] > # This option supports live configuration reload.
	I1216 04:29:40.125121  475694 command_runner.go:130] > # pause_image_auth_file = ""
	I1216 04:29:40.125127  475694 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1216 04:29:40.125133  475694 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1216 04:29:40.125139  475694 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1216 04:29:40.125145  475694 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1216 04:29:40.125160  475694 command_runner.go:130] > # pause_command = "/pause"
	I1216 04:29:40.125167  475694 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1216 04:29:40.125172  475694 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1216 04:29:40.125178  475694 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1216 04:29:40.125184  475694 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1216 04:29:40.125190  475694 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1216 04:29:40.125198  475694 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1216 04:29:40.125209  475694 command_runner.go:130] > # pinned_images = [
	I1216 04:29:40.125213  475694 command_runner.go:130] > # ]
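	A sketch of the three pattern kinds (the image names are illustrative):

	  [crio.image]
	  pinned_images = [
	      "registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
	      "registry.k8s.io/kube-*",        # glob: wildcard at the end
	      "*coredns*",                     # keyword: wildcards on both ends
	  ]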
	I1216 04:29:40.125219  475694 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1216 04:29:40.125226  475694 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1216 04:29:40.125232  475694 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1216 04:29:40.125238  475694 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1216 04:29:40.125243  475694 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1216 04:29:40.125248  475694 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1216 04:29:40.125253  475694 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1216 04:29:40.125268  475694 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1216 04:29:40.125275  475694 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1216 04:29:40.125281  475694 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1216 04:29:40.125287  475694 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1216 04:29:40.125291  475694 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
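	As a sketch, with the defaults below a pull in a hypothetical pod namespace "team-a" would be checked against /etc/crio/policies/team-a.json, falling back to signature_policy if that file does not exist:

	  [crio.image]
	  signature_policy = "/etc/crio/policy.json"
	  signature_policy_dir = "/etc/crio/policies"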
	I1216 04:29:40.125298  475694 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1216 04:29:40.125304  475694 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1216 04:29:40.125308  475694 command_runner.go:130] > # changing them here.
	I1216 04:29:40.125313  475694 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1216 04:29:40.125317  475694 command_runner.go:130] > # insecure_registries = [
	I1216 04:29:40.125325  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125331  475694 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1216 04:29:40.125338  475694 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1216 04:29:40.125343  475694 command_runner.go:130] > # image_volumes = "mkdir"
	I1216 04:29:40.125348  475694 command_runner.go:130] > # Temporary directory to use for storing big files
	I1216 04:29:40.125352  475694 command_runner.go:130] > # big_files_temporary_dir = ""
	I1216 04:29:40.125358  475694 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1216 04:29:40.125365  475694 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1216 04:29:40.125369  475694 command_runner.go:130] > # auto_reload_registries = false
	I1216 04:29:40.125375  475694 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1216 04:29:40.125386  475694 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1216 04:29:40.125392  475694 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1216 04:29:40.125396  475694 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1216 04:29:40.125400  475694 command_runner.go:130] > # The mode of short name resolution.
	I1216 04:29:40.125406  475694 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1216 04:29:40.125414  475694 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1216 04:29:40.125419  475694 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1216 04:29:40.125422  475694 command_runner.go:130] > # short_name_mode = "enforcing"
	I1216 04:29:40.125428  475694 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1216 04:29:40.125435  475694 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1216 04:29:40.125439  475694 command_runner.go:130] > # oci_artifact_mount_support = true
	I1216 04:29:40.125445  475694 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1216 04:29:40.125449  475694 command_runner.go:130] > # CNI plugins.
	I1216 04:29:40.125456  475694 command_runner.go:130] > [crio.network]
	I1216 04:29:40.125462  475694 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1216 04:29:40.125467  475694 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1216 04:29:40.125471  475694 command_runner.go:130] > # cni_default_network = ""
	I1216 04:29:40.125476  475694 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1216 04:29:40.125481  475694 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1216 04:29:40.125487  475694 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1216 04:29:40.125498  475694 command_runner.go:130] > # plugin_dirs = [
	I1216 04:29:40.125501  475694 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1216 04:29:40.125504  475694 command_runner.go:130] > # ]
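	A minimal [crio.network] sketch (the network name is hypothetical):

	  [crio.network]
	  cni_default_network = "example-net"
	  network_dir = "/etc/cni/net.d/"
	  plugin_dirs = [
	      "/opt/cni/bin/",
	  ]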
	I1216 04:29:40.125508  475694 command_runner.go:130] > # List of included pod metrics.
	I1216 04:29:40.125512  475694 command_runner.go:130] > # included_pod_metrics = [
	I1216 04:29:40.125515  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125521  475694 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1216 04:29:40.125524  475694 command_runner.go:130] > [crio.metrics]
	I1216 04:29:40.125529  475694 command_runner.go:130] > # Globally enable or disable metrics support.
	I1216 04:29:40.125533  475694 command_runner.go:130] > # enable_metrics = false
	I1216 04:29:40.125537  475694 command_runner.go:130] > # Specify enabled metrics collectors.
	I1216 04:29:40.125542  475694 command_runner.go:130] > # Per default all metrics are enabled.
	I1216 04:29:40.125549  475694 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1216 04:29:40.125557  475694 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1216 04:29:40.125564  475694 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1216 04:29:40.125568  475694 command_runner.go:130] > # metrics_collectors = [
	I1216 04:29:40.125572  475694 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1216 04:29:40.125576  475694 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1216 04:29:40.125580  475694 command_runner.go:130] > # 	"containers_oom_total",
	I1216 04:29:40.125584  475694 command_runner.go:130] > # 	"processes_defunct",
	I1216 04:29:40.125587  475694 command_runner.go:130] > # 	"operations_total",
	I1216 04:29:40.125591  475694 command_runner.go:130] > # 	"operations_latency_seconds",
	I1216 04:29:40.125596  475694 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1216 04:29:40.125600  475694 command_runner.go:130] > # 	"operations_errors_total",
	I1216 04:29:40.125604  475694 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1216 04:29:40.125608  475694 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1216 04:29:40.125615  475694 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1216 04:29:40.125619  475694 command_runner.go:130] > # 	"image_pulls_success_total",
	I1216 04:29:40.125623  475694 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1216 04:29:40.125627  475694 command_runner.go:130] > # 	"containers_oom_count_total",
	I1216 04:29:40.125632  475694 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1216 04:29:40.125636  475694 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1216 04:29:40.125640  475694 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1216 04:29:40.125643  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125649  475694 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1216 04:29:40.125653  475694 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1216 04:29:40.125658  475694 command_runner.go:130] > # The port on which the metrics server will listen.
	I1216 04:29:40.125662  475694 command_runner.go:130] > # metrics_port = 9090
	I1216 04:29:40.125667  475694 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1216 04:29:40.125670  475694 command_runner.go:130] > # metrics_socket = ""
	I1216 04:29:40.125678  475694 command_runner.go:130] > # The certificate for the secure metrics server.
	I1216 04:29:40.125684  475694 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1216 04:29:40.125690  475694 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1216 04:29:40.125694  475694 command_runner.go:130] > # certificate on any modification event.
	I1216 04:29:40.125698  475694 command_runner.go:130] > # metrics_cert = ""
	I1216 04:29:40.125703  475694 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1216 04:29:40.125708  475694 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1216 04:29:40.125711  475694 command_runner.go:130] > # metrics_key = ""
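	A sketch enabling the metrics endpoint with a subset of the collectors listed above (host and port are the documented defaults):

	  [crio.metrics]
	  enable_metrics = true
	  metrics_host = "127.0.0.1"
	  metrics_port = 9090
	  metrics_collectors = [
	      "operations_total",
	      "image_pulls_failure_total",
	  ]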
	I1216 04:29:40.125718  475694 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1216 04:29:40.125721  475694 command_runner.go:130] > [crio.tracing]
	I1216 04:29:40.125726  475694 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1216 04:29:40.125730  475694 command_runner.go:130] > # enable_tracing = false
	I1216 04:29:40.125735  475694 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1216 04:29:40.125740  475694 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1216 04:29:40.125747  475694 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1216 04:29:40.125753  475694 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
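	A sketch that turns tracing on and always samples, per the comment above:

	  [crio.tracing]
	  enable_tracing = true
	  tracing_endpoint = "127.0.0.1:4317"
	  # 1000000 samples per million spans = always sample.
	  tracing_sampling_rate_per_million = 1000000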
	I1216 04:29:40.125757  475694 command_runner.go:130] > # CRI-O NRI configuration.
	I1216 04:29:40.125760  475694 command_runner.go:130] > [crio.nri]
	I1216 04:29:40.125764  475694 command_runner.go:130] > # Globally enable or disable NRI.
	I1216 04:29:40.125772  475694 command_runner.go:130] > # enable_nri = true
	I1216 04:29:40.125776  475694 command_runner.go:130] > # NRI socket to listen on.
	I1216 04:29:40.125781  475694 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1216 04:29:40.125785  475694 command_runner.go:130] > # NRI plugin directory to use.
	I1216 04:29:40.125789  475694 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1216 04:29:40.125794  475694 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1216 04:29:40.125799  475694 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1216 04:29:40.125804  475694 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1216 04:29:40.125861  475694 command_runner.go:130] > # nri_disable_connections = false
	I1216 04:29:40.125867  475694 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1216 04:29:40.125871  475694 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1216 04:29:40.125876  475694 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1216 04:29:40.125881  475694 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1216 04:29:40.125885  475694 command_runner.go:130] > # NRI default validator configuration.
	I1216 04:29:40.125892  475694 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1216 04:29:40.125898  475694 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1216 04:29:40.125902  475694 command_runner.go:130] > # can be restricted/rejected:
	I1216 04:29:40.125905  475694 command_runner.go:130] > # - OCI hook injection
	I1216 04:29:40.125910  475694 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1216 04:29:40.125915  475694 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1216 04:29:40.125919  475694 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1216 04:29:40.125923  475694 command_runner.go:130] > # - adjustment of linux namespaces
	I1216 04:29:40.125929  475694 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1216 04:29:40.125936  475694 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1216 04:29:40.125941  475694 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1216 04:29:40.125944  475694 command_runner.go:130] > #
	I1216 04:29:40.125948  475694 command_runner.go:130] > # [crio.nri.default_validator]
	I1216 04:29:40.125953  475694 command_runner.go:130] > # nri_enable_default_validator = false
	I1216 04:29:40.125958  475694 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1216 04:29:40.125963  475694 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1216 04:29:40.125969  475694 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1216 04:29:40.125974  475694 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1216 04:29:40.125979  475694 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1216 04:29:40.125986  475694 command_runner.go:130] > # nri_validator_required_plugins = [
	I1216 04:29:40.125991  475694 command_runner.go:130] > # ]
	I1216 04:29:40.125996  475694 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
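	A sketch of the default validator rejecting OCI hook injection and requiring one plugin (the plugin name is hypothetical):

	  [crio.nri.default_validator]
	  nri_enable_default_validator = true
	  nri_validator_reject_oci_hook_adjustment = true
	  nri_validator_required_plugins = [
	      "example-plugin",
	  ]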
	I1216 04:29:40.126002  475694 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1216 04:29:40.126007  475694 command_runner.go:130] > [crio.stats]
	I1216 04:29:40.126013  475694 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1216 04:29:40.126018  475694 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1216 04:29:40.126022  475694 command_runner.go:130] > # stats_collection_period = 0
	I1216 04:29:40.126028  475694 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1216 04:29:40.126034  475694 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1216 04:29:40.126038  475694 command_runner.go:130] > # collection_period = 0
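	A sketch switching stats and metrics collection from on-demand to periodic (the 10-second period is hypothetical):

	  [crio.stats]
	  stats_collection_period = 10
	  collection_period = 10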
	I1216 04:29:40.126084  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086834829Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1216 04:29:40.126093  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086875912Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1216 04:29:40.126103  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086913837Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1216 04:29:40.126111  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.086943031Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1216 04:29:40.126123  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.087027733Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:29:40.126132  475694 command_runner.go:130] ! time="2025-12-16T04:29:40.087362399Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1216 04:29:40.126142  475694 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1216 04:29:40.126226  475694 cni.go:84] Creating CNI manager for ""
	I1216 04:29:40.126235  475694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:29:40.126255  475694 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:29:40.126277  475694 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-763073 NodeName:functional-763073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:29:40.126422  475694 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-763073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:29:40.126497  475694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:29:40.134815  475694 command_runner.go:130] > kubeadm
	I1216 04:29:40.134839  475694 command_runner.go:130] > kubectl
	I1216 04:29:40.134844  475694 command_runner.go:130] > kubelet
	I1216 04:29:40.134872  475694 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:29:40.134932  475694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:29:40.143529  475694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 04:29:40.156375  475694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:29:40.169188  475694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1216 04:29:40.182223  475694 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:29:40.185968  475694 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 04:29:40.186105  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:40.327743  475694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:29:41.068736  475694 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073 for IP: 192.168.49.2
	I1216 04:29:41.068757  475694 certs.go:195] generating shared ca certs ...
	I1216 04:29:41.068779  475694 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.069050  475694 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:29:41.069145  475694 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:29:41.069172  475694 certs.go:257] generating profile certs ...
	I1216 04:29:41.069366  475694 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key
	I1216 04:29:41.069439  475694 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195
	I1216 04:29:41.069492  475694 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key
	I1216 04:29:41.069508  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 04:29:41.069527  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 04:29:41.069550  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 04:29:41.069568  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 04:29:41.069598  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 04:29:41.069624  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 04:29:41.069636  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 04:29:41.069661  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 04:29:41.069722  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 04:29:41.069792  475694 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 04:29:41.069804  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:29:41.069832  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:29:41.069864  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:29:41.069933  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:29:41.070011  475694 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:29:41.070050  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.070068  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.070082  475694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem -> /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.070740  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:29:41.088516  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:29:41.106273  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:29:41.124169  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:29:41.142346  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:29:41.160632  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:29:41.181690  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:29:41.199949  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:29:41.217789  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 04:29:41.237601  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:29:41.255073  475694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 04:29:41.272738  475694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:29:41.286149  475694 ssh_runner.go:195] Run: openssl version
	I1216 04:29:41.292023  475694 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 04:29:41.292477  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.299852  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 04:29:41.307795  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312150  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312182  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.312250  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 04:29:41.353168  475694 command_runner.go:130] > 3ec20f2e
	I1216 04:29:41.353674  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:29:41.362516  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.370150  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:29:41.377841  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.381956  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.381986  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.382040  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:29:41.422880  475694 command_runner.go:130] > b5213941
	I1216 04:29:41.423347  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:29:41.430980  475694 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.438640  475694 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 04:29:41.446570  475694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450618  475694 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450691  475694 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.450770  475694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 04:29:41.493534  475694 command_runner.go:130] > 51391683
	I1216 04:29:41.494044  475694 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
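The repeated hash-and-symlink steps above implement the OpenSSL trust-store convention: each PEM under /usr/share/ca-certificates is hashed with openssl x509 -hash -noout (3ec20f2e, b5213941 and 51391683 in this run), and a symlink named <hash>.0 is created in /etc/ssl/certs so OpenSSL can locate the certificate by subject hash. A minimal sketch of that step, shelling out to openssl the same way the log does (the helper name and paths are illustrative, not minikube's code):

// installCACert mirrors the log's hashing step: compute the OpenSSL
// subject hash of a PEM certificate and link it as <hash>.0 so the
// system trust store can find it. Sketch only; paths are examples.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath, certsDir string) error {
	// Equivalent to: openssl x509 -hash -noout -in <pemPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent to: ln -fs <pemPath> <certsDir>/<hash>.0
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}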
	I1216 04:29:41.501730  475694 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:29:41.505651  475694 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:29:41.505723  475694 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 04:29:41.505736  475694 command_runner.go:130] > Device: 259,1	Inode: 1313043     Links: 1
	I1216 04:29:41.505744  475694 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:29:41.505751  475694 command_runner.go:130] > Access: 2025-12-16 04:25:32.918538317 +0000
	I1216 04:29:41.505756  475694 command_runner.go:130] > Modify: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505760  475694 command_runner.go:130] > Change: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505765  475694 command_runner.go:130] >  Birth: 2025-12-16 04:21:27.832077118 +0000
	I1216 04:29:41.505860  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:29:41.547026  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.547554  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:29:41.588926  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.589431  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:29:41.630503  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.630976  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:29:41.679374  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.679872  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:29:41.720872  475694 command_runner.go:130] > Certificate will not expire
	I1216 04:29:41.720962  475694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 04:29:41.763843  475694 command_runner.go:130] > Certificate will not expire
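Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours (86400 seconds); "Certificate will not expire" means it does not. The same check in pure Go with crypto/x509 (a sketch of the equivalent logic, not minikube's implementation):

// expiresWithin reports whether the PEM certificate at path expires
// within d: the pure-Go analogue of openssl x509 -checkend <seconds>.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}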
	I1216 04:29:41.764306  475694 kubeadm.go:401] StartCluster: {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:29:41.764397  475694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:29:41.764473  475694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:29:41.794813  475694 cri.go:89] found id: ""
	I1216 04:29:41.795018  475694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:29:41.802238  475694 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 04:29:41.802260  475694 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 04:29:41.802267  475694 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 04:29:41.803148  475694 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:29:41.803169  475694 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:29:41.803241  475694 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:29:41.810442  475694 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:29:41.810892  475694 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-763073" does not appear in /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.811005  475694 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-438353/kubeconfig needs updating (will repair): [kubeconfig missing "functional-763073" cluster setting kubeconfig missing "functional-763073" context setting]
	I1216 04:29:41.811272  475694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.811696  475694 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.811844  475694 kapi.go:59] client config for functional-763073: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
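The "needs updating (will repair)" message above means the kubeconfig file exists but lacks both a cluster and a context entry for this profile, so minikube writes them back before building the client config just shown. With client-go's clientcmd package the repair amounts to roughly the following (values taken from the log; the code itself is an illustrative sketch, not minikube's repair routine):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22158-438353/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	// Add the missing cluster entry for the profile.
	cfg.Clusters["functional-763073"] = &clientcmdapi.Cluster{
		Server:               "https://192.168.49.2:8441",
		CertificateAuthority: "/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt",
	}
	// Add the missing context entry pointing at that cluster.
	cfg.Contexts["functional-763073"] = &clientcmdapi.Context{
		Cluster:  "functional-763073",
		AuthInfo: "functional-763073",
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}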
	I1216 04:29:41.812430  475694 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 04:29:41.812449  475694 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 04:29:41.812455  475694 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 04:29:41.812459  475694 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 04:29:41.812464  475694 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 04:29:41.812504  475694 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:29:41.812753  475694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:29:41.827245  475694 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 04:29:41.827324  475694 kubeadm.go:602] duration metric: took 24.148626ms to restartPrimaryControlPlane
	I1216 04:29:41.827348  475694 kubeadm.go:403] duration metric: took 63.050551ms to StartCluster
	I1216 04:29:41.827392  475694 settings.go:142] acquiring lock: {Name:mk7579526d30444d4a36dd9eeacfd82389e55168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.827497  475694 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.828225  475694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:29:41.828522  475694 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:29:41.828868  475694 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:29:41.828926  475694 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:29:41.829003  475694 addons.go:70] Setting storage-provisioner=true in profile "functional-763073"
	I1216 04:29:41.829025  475694 addons.go:239] Setting addon storage-provisioner=true in "functional-763073"
	I1216 04:29:41.829051  475694 host.go:66] Checking if "functional-763073" exists ...
	I1216 04:29:41.829717  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.829866  475694 addons.go:70] Setting default-storageclass=true in profile "functional-763073"
	I1216 04:29:41.829889  475694 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-763073"
	I1216 04:29:41.830179  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:29:41.835425  475694 out.go:179] * Verifying Kubernetes components...
	I1216 04:29:41.843204  475694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:29:41.852282  475694 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:29:41.852487  475694 kapi.go:59] client config for functional-763073: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:29:41.852847  475694 addons.go:239] Setting addon default-storageclass=true in "functional-763073"
	I1216 04:29:41.852883  475694 host.go:66] Checking if "functional-763073" exists ...
	I1216 04:29:41.853441  475694 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
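The repeated cli_runner calls above read the machine state with docker container inspect and a Go template. A standalone equivalent of that probe (sketch; the profile name is taken from the log):

// containerStatus shells out the same way the cli_runner lines do:
// docker container inspect <name> --format={{.State.Status}}
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil // e.g. "running"
}

func main() {
	status, err := containerStatus("functional-763073")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(status)
}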
	I1216 04:29:41.902066  475694 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:29:41.905129  475694 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:41.905181  475694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:29:41.905276  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:41.908977  475694 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:41.909002  475694 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:29:41.909132  475694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:29:41.960105  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:41.975058  475694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:29:42.043859  475694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:29:42.092471  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:42.106008  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:42.818195  475694 node_ready.go:35] waiting up to 6m0s for node "functional-763073" to be "Ready" ...
	I1216 04:29:42.818367  475694 type.go:168] "Request Body" body=""
	I1216 04:29:42.818432  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:42.818659  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:42.818682  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818701  475694 retry.go:31] will retry after 327.643243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818740  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:42.818752  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:42.818759  475694 retry.go:31] will retry after 171.339125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
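Both manifests fail to apply because kubectl's validation tries to download the OpenAPI schema from the apiserver on localhost:8441, which is still refusing connections during the restart, so the addon applier schedules retries with short randomized delays (327ms and 171ms here, growing on later attempts). The pattern, reduced to a sketch (minikube's real logic lives in retry.go; the helper below is illustrative only):

// applyWithRetry retries an operation with jittered, growing backoff,
// the pattern behind the "will retry after ..." lines. Sketch only.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func applyWithRetry(op func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jittered exponential backoff: base * 2^i, scaled by a random factor.
		delay := time.Duration(float64(base) * float64(int(1)<<i) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := applyWithRetry(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	}, 5, 200*time.Millisecond)
	fmt.Println("result:", err)
}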
	I1216 04:29:42.818814  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:42.990327  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:43.052462  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.052555  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.052597  475694 retry.go:31] will retry after 320.089446ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.146742  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.207665  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.212209  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.212243  475694 retry.go:31] will retry after 291.464307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.318395  475694 type.go:168] "Request Body" body=""
	I1216 04:29:43.318472  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:43.318814  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:43.373308  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:43.435189  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.435254  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.435280  475694 retry.go:31] will retry after 781.758867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.504448  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.571334  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.571371  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.571390  475694 retry.go:31] will retry after 332.937553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.818906  475694 type.go:168] "Request Body" body=""
	I1216 04:29:43.818991  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:43.819297  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:43.904706  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:43.962384  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:43.966307  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:43.966396  475694 retry.go:31] will retry after 1.136896719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.217759  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:44.279618  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:44.283381  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.283415  475694 retry.go:31] will retry after 1.1051557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:44.318552  475694 type.go:168] "Request Body" body=""
	I1216 04:29:44.318673  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:44.319015  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:44.818498  475694 type.go:168] "Request Body" body=""
	I1216 04:29:44.818571  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:44.818910  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:44.818988  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
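node_ready.go is polling GET /api/v1/nodes/functional-763073 roughly every 500ms and will keep going for up to 6m; while the apiserver is down, every probe ends in connection refused, which is logged and retried rather than treated as fatal. With client-go, such a readiness loop looks roughly like this (a sketch, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Connection refused while the apiserver restarts: keep polling.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "functional-763073", 6*time.Minute))
}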
	I1216 04:29:45.103534  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:45.194787  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:45.195010  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.195099  475694 retry.go:31] will retry after 1.211699823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.319146  475694 type.go:168] "Request Body" body=""
	I1216 04:29:45.319235  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:45.319562  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:45.388763  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:45.456804  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:45.456849  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.456877  475694 retry.go:31] will retry after 720.865488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:45.819295  475694 type.go:168] "Request Body" body=""
	I1216 04:29:45.819381  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:45.819670  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:46.178239  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:46.241684  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:46.241730  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.241750  475694 retry.go:31] will retry after 2.398929444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.318930  475694 type.go:168] "Request Body" body=""
	I1216 04:29:46.319008  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:46.319303  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:46.407630  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:46.476894  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:46.476941  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.476959  475694 retry.go:31] will retry after 1.300502308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:46.818702  475694 type.go:168] "Request Body" body=""
	I1216 04:29:46.818786  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:46.819124  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:46.819187  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:47.318514  475694 type.go:168] "Request Body" body=""
	I1216 04:29:47.318594  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:47.318866  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:47.778651  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:47.819040  475694 type.go:168] "Request Body" body=""
	I1216 04:29:47.819112  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:47.819424  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:47.836852  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:47.840282  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:47.840312  475694 retry.go:31] will retry after 3.994114703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.318482  475694 type.go:168] "Request Body" body=""
	I1216 04:29:48.318555  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:48.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:48.641498  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:48.705855  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:48.705903  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.705923  475694 retry.go:31] will retry after 1.757515206s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:48.819100  475694 type.go:168] "Request Body" body=""
	I1216 04:29:48.819185  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:48.819457  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:48.819514  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:49.319285  475694 type.go:168] "Request Body" body=""
	I1216 04:29:49.319362  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:49.319697  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:49.819385  475694 type.go:168] "Request Body" body=""
	I1216 04:29:49.819456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:49.819795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:50.318415  475694 type.go:168] "Request Body" body=""
	I1216 04:29:50.318509  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:50.318828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:50.464331  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:50.523255  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:50.523310  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:50.523330  475694 retry.go:31] will retry after 5.029530817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:50.818441  475694 type.go:168] "Request Body" body=""
	I1216 04:29:50.818532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:50.818884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:51.318457  475694 type.go:168] "Request Body" body=""
	I1216 04:29:51.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:51.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:51.318895  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:51.819013  475694 type.go:168] "Request Body" body=""
	I1216 04:29:51.819120  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:51.819434  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:51.834846  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:51.906733  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:51.906789  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:51.906807  475694 retry.go:31] will retry after 4.132534587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:52.319380  475694 type.go:168] "Request Body" body=""
	I1216 04:29:52.319456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:52.319782  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:52.818402  475694 type.go:168] "Request Body" body=""
	I1216 04:29:52.818481  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:52.818820  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:53.318399  475694 type.go:168] "Request Body" body=""
	I1216 04:29:53.318484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:53.318781  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:53.818364  475694 type.go:168] "Request Body" body=""
	I1216 04:29:53.818436  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:53.818718  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:53.818768  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:29:54.318470  475694 type.go:168] "Request Body" body=""
	I1216 04:29:54.318553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:54.318855  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:54.818416  475694 type.go:168] "Request Body" body=""
	I1216 04:29:54.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:54.818791  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:55.318474  475694 type.go:168] "Request Body" body=""
	I1216 04:29:55.318563  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:55.318906  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:29:55.553265  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:29:55.626702  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:55.630832  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:55.630867  475694 retry.go:31] will retry after 7.132223529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
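Each failed apply is rescheduled by minikube's retry helper (the retry.go:31 lines); the waits recorded across this section (7.1s, 8.9s, 11.1s, 13.8s, 8.1s, 11.4s, 27.8s, 18.3s) grow roughly exponentially but are jittered, so they are not strictly increasing. A minimal sketch of capped exponential backoff with jitter, assuming a generic op func() error (illustrative, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs op up to attempts times, sleeping between failures with
// exponential backoff plus random jitter, capped at maxWait.
func retry(attempts int, base, maxWait time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		wait := base << i // exponential growth: base, 2*base, 4*base, ...
		if wait > maxWait {
			wait = maxWait
		}
		// add up to 50% jitter so concurrent retriers do not synchronize
		wait += time.Duration(rand.Int63n(int64(wait / 2)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 2*time.Second, 30*time.Second, func() error {
		calls++
		if calls < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}

Capping the wait keeps a flapping apiserver from pushing retries out indefinitely, and the jitter explains why the intervals observed in this log are not monotonic.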
	I1216 04:29:55.819263  475694 type.go:168] "Request Body" body=""
	I1216 04:29:55.819349  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:29:55.819703  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:29:55.819756  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
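The node_ready.go:55 warnings above come from a loop that polls the node object every ~500ms until its Ready condition can be read. A client-go sketch of such a readiness wait, assuming the kubeconfig path from the trace; waitNodeReady is a hypothetical helper, not minikube's node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node every interval until its Ready
// condition is True, the context expires, or the caller cancels.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval time.Duration) error {
	tick := time.NewTicker(interval)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// e.g. "connection refused" while the apiserver restarts; keep retrying
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-763073", 500*time.Millisecond); err != nil {
		panic(err)
	}
}

The 500ms interval matches the cadence of the polls in this trace; the loop deliberately keeps going through "connection refused" because the apiserver is expected to come back.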
	I1216 04:29:56.040181  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:29:56.104678  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:29:56.104716  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:29:56.104735  475694 retry.go:31] will retry after 8.857583825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
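The Run / stdout / stderr / "Process exited with status 1" shape of these blocks comes from minikube's command runner executing kubectl inside the node and capturing both streams before checking the exit status. A local stand-in using os/exec (assumption: plain exec.Command in place of minikube's SSH-backed runner):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runCapture executes a command, captures stdout and stderr separately,
// and returns the error carrying any non-zero exit status, mirroring the
// Run / stdout / stderr blocks in the trace above.
func runCapture(name string, args ...string) (stdout, stderr string, err error) {
	var out, errb bytes.Buffer
	cmd := exec.Command(name, args...)
	cmd.Stdout = &out
	cmd.Stderr = &errb
	err = cmd.Run() // a failure surfaces as *exec.ExitError with the status
	return out.String(), errb.String(), err
}

func main() {
	stdout, stderr, err := runCapture("kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	if err != nil {
		fmt.Printf("apply failed, will retry: %v\nstdout:\n%s\nstderr:\n%s\n", err, stdout, stderr)
	}
}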
	[... 13 identical refused GET polls (04:29:56.319–04:30:02.319); node_ready.go:55 "Ready" retry warnings at 04:29:58.319 and 04:30:00.819 ...]
	I1216 04:30:02.763349  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[... one concurrent refused poll at 04:30:02.818 while the apply ran ...]
	I1216 04:30:02.830785  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:02.830835  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:02.830855  475694 retry.go:31] will retry after 11.115111011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 4 identical refused polls (04:30:03.318–04:30:04.819); node_ready.go:55 warning at 04:30:03.318 ...]
	I1216 04:30:04.963132  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:05.030528  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:05.030573  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:05.030594  475694 retry.go:31] will retry after 13.807129774s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 18 identical refused polls (04:30:05.319–04:30:13.818); node_ready.go:55 warnings at 04:30:05.319, 04:30:07.819, 04:30:10.319 and 04:30:12.319 ...]
	I1216 04:30:13.946231  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:14.010550  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:14.014827  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:14.014869  475694 retry.go:31] will retry after 8.112010712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 10 identical refused polls (04:30:14.319–04:30:18.818); node_ready.go:55 warnings at 04:30:14.319, 04:30:16.319 and 04:30:18.818 ...]
	I1216 04:30:18.838055  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:18.893739  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:18.897596  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:18.897631  475694 retry.go:31] will retry after 11.366080685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 6 identical refused polls (04:30:19.319–04:30:21.818); node_ready.go:55 warning at 04:30:21.318 ...]
	I1216 04:30:22.127748  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:22.189082  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:22.189129  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:22.189148  475694 retry.go:31] will retry after 27.844564007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 8 identical refused polls (04:30:22.319–04:30:25.818); node_ready.go:55 warnings at 04:30:23.319 and 04:30:25.818 ...]
	[... 8 identical refused polls (04:30:26.318–04:30:29.818); node_ready.go:55 warning at 04:30:27.819 ...]
	I1216 04:30:30.264789  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[... concurrent refused poll and node_ready.go:55 warning at 04:30:30.318 while the apply ran ...]
	I1216 04:30:30.329449  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:30.329484  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:30.329503  475694 retry.go:31] will retry after 18.349811318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:30.819293  475694 type.go:168] "Request Body" body=""
	I1216 04:30:30.819380  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:30.819741  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:31.318473  475694 type.go:168] "Request Body" body=""
	I1216 04:30:31.318550  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:31.318884  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:31.818872  475694 type.go:168] "Request Body" body=""
	I1216 04:30:31.818940  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:31.819221  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:32.319072  475694 type.go:168] "Request Body" body=""
	I1216 04:30:32.319152  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:32.319497  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:32.319550  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:32.819264  475694 type.go:168] "Request Body" body=""
	I1216 04:30:32.819341  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:32.819678  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:33.319325  475694 type.go:168] "Request Body" body=""
	I1216 04:30:33.319391  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:33.319698  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:33.818422  475694 type.go:168] "Request Body" body=""
	I1216 04:30:33.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:33.818854  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:34.318569  475694 type.go:168] "Request Body" body=""
	I1216 04:30:34.318644  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:34.318965  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:30:34.818658  475694 type.go:168] "Request Body" body=""
	I1216 04:30:34.818733  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:30:34.819000  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:30:34.819051  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:30:48.679520  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:30:48.741510  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:48.741587  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:48.741616  475694 retry.go:31] will retry after 29.090794722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:50.034416  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:30:50.096674  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:30:50.100468  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:30:50.100502  475694 retry.go:31] will retry after 39.426681546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:31:17.833208  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:31:17.902395  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:17.906323  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:17.906439  475694 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:31:29.528240  475694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:31:29.598877  475694 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:29.598918  475694 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:31:29.598995  475694 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:31:29.602136  475694 out.go:179] * Enabled addons: 
	I1216 04:31:29.604114  475694 addons.go:530] duration metric: took 1m47.775177414s for enable addons: enabled=[]
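	(The storageclass failure above is expected while port 8441 refuses connections: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so even --validate=false would only skip validation, not the apply itself. A rough sketch of the "apply failed, will retry" behaviour follows; the retry budget and backoff are illustrative assumptions, not the real addons.go logic.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyAddon shells out to kubectl the way the log above does and retries
// while the apiserver is still coming up. It assumes kubectl is on PATH
// and KUBECONFIG is already set in the environment.
func applyAddon(manifest string) error {
	var lastErr error
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed (attempt %d): %v\n%s", attempt, err, out)
		fmt.Println("will retry:", lastErr)
		time.Sleep(2 * time.Second) // back off while the apiserver restarts
	}
	return lastErr
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println("giving up:", err)
	}
}

	(In this run the retries never succeed, so the addon manager reports "Enabled addons:" with an empty list after the 1m47s it spent on enable callbacks.)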
	I1216 04:31:29.818770  475694 type.go:168] "Request Body" body=""
	I1216 04:31:29.818886  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:29.819272  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:30.319022  475694 type.go:168] "Request Body" body=""
	I1216 04:31:30.319147  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:30.319404  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:30.819213  475694 type.go:168] "Request Body" body=""
	I1216 04:31:30.819315  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:30.819674  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:31.318340  475694 type.go:168] "Request Body" body=""
	I1216 04:31:31.318412  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:31.318743  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:31.318800  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:31.818902  475694 type.go:168] "Request Body" body=""
	I1216 04:31:31.818970  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:31.819227  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:32.319058  475694 type.go:168] "Request Body" body=""
	I1216 04:31:32.319135  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:32.319508  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:32.819330  475694 type.go:168] "Request Body" body=""
	I1216 04:31:32.819408  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:32.819753  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:33.318423  475694 type.go:168] "Request Body" body=""
	I1216 04:31:33.318501  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:33.318811  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:33.318863  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:33.818381  475694 type.go:168] "Request Body" body=""
	I1216 04:31:33.818456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:33.818785  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:34.318363  475694 type.go:168] "Request Body" body=""
	I1216 04:31:34.318438  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:34.318790  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:34.819369  475694 type.go:168] "Request Body" body=""
	I1216 04:31:34.819438  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:34.819713  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:35.318423  475694 type.go:168] "Request Body" body=""
	I1216 04:31:35.318500  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:35.318872  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:35.318943  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:35.818615  475694 type.go:168] "Request Body" body=""
	I1216 04:31:35.818692  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:35.819009  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:36.318408  475694 type.go:168] "Request Body" body=""
	I1216 04:31:36.318490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:36.318747  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:36.818925  475694 type.go:168] "Request Body" body=""
	I1216 04:31:36.819003  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:36.819578  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:37.319361  475694 type.go:168] "Request Body" body=""
	I1216 04:31:37.319459  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:37.319790  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:37.319835  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:37.818431  475694 type.go:168] "Request Body" body=""
	I1216 04:31:37.818525  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:37.818876  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:38.318453  475694 type.go:168] "Request Body" body=""
	I1216 04:31:38.318535  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:38.318874  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:38.818429  475694 type.go:168] "Request Body" body=""
	I1216 04:31:38.818504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:38.818816  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:39.318529  475694 type.go:168] "Request Body" body=""
	I1216 04:31:39.318609  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:39.318895  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:39.818381  475694 type.go:168] "Request Body" body=""
	I1216 04:31:39.818456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:39.818789  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:39.818858  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:40.318433  475694 type.go:168] "Request Body" body=""
	I1216 04:31:40.318507  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:40.318811  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:40.818376  475694 type.go:168] "Request Body" body=""
	I1216 04:31:40.818450  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:40.818707  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:41.318416  475694 type.go:168] "Request Body" body=""
	I1216 04:31:41.318824  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:41.319203  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:41.819213  475694 type.go:168] "Request Body" body=""
	I1216 04:31:41.819296  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:41.819635  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:41.819695  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:42.319416  475694 type.go:168] "Request Body" body=""
	I1216 04:31:42.319499  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:42.319800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:42.818819  475694 type.go:168] "Request Body" body=""
	I1216 04:31:42.818916  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:42.819270  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:43.319056  475694 type.go:168] "Request Body" body=""
	I1216 04:31:43.319132  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:43.319459  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:43.819240  475694 type.go:168] "Request Body" body=""
	I1216 04:31:43.819310  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:43.819650  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:44.319420  475694 type.go:168] "Request Body" body=""
	I1216 04:31:44.319496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:44.319840  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:44.319896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:44.818558  475694 type.go:168] "Request Body" body=""
	I1216 04:31:44.818637  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:44.818980  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:45.318674  475694 type.go:168] "Request Body" body=""
	I1216 04:31:45.318748  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:45.319042  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:45.818436  475694 type.go:168] "Request Body" body=""
	I1216 04:31:45.818512  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:45.818872  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:46.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:31:46.318525  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:46.318863  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:46.818761  475694 type.go:168] "Request Body" body=""
	I1216 04:31:46.818837  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:46.819095  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:46.819145  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:47.318441  475694 type.go:168] "Request Body" body=""
	I1216 04:31:47.318515  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:47.318857  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:47.818554  475694 type.go:168] "Request Body" body=""
	I1216 04:31:47.818627  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:47.818943  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:48.318406  475694 type.go:168] "Request Body" body=""
	I1216 04:31:48.318482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:48.318744  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:48.818444  475694 type.go:168] "Request Body" body=""
	I1216 04:31:48.818531  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:48.818844  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:49.318456  475694 type.go:168] "Request Body" body=""
	I1216 04:31:49.318533  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:49.318871  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:49.318926  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:49.818452  475694 type.go:168] "Request Body" body=""
	I1216 04:31:49.818529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:49.818832  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:50.318454  475694 type.go:168] "Request Body" body=""
	I1216 04:31:50.318530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:50.318907  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:50.818617  475694 type.go:168] "Request Body" body=""
	I1216 04:31:50.818699  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:50.819034  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:51.318728  475694 type.go:168] "Request Body" body=""
	I1216 04:31:51.318799  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:51.319084  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:51.319133  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:51.819260  475694 type.go:168] "Request Body" body=""
	I1216 04:31:51.819337  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:51.819646  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:52.319367  475694 type.go:168] "Request Body" body=""
	I1216 04:31:52.319460  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:52.319796  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:52.818415  475694 type.go:168] "Request Body" body=""
	I1216 04:31:52.818483  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:52.818735  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:53.318406  475694 type.go:168] "Request Body" body=""
	I1216 04:31:53.318485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:53.318824  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:53.818542  475694 type.go:168] "Request Body" body=""
	I1216 04:31:53.818618  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:53.818932  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:53.818988  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:54.318422  475694 type.go:168] "Request Body" body=""
	I1216 04:31:54.318498  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:54.318812  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:54.818426  475694 type.go:168] "Request Body" body=""
	I1216 04:31:54.818504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:54.818816  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:55.318417  475694 type.go:168] "Request Body" body=""
	I1216 04:31:55.318540  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:55.318874  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:55.818438  475694 type.go:168] "Request Body" body=""
	I1216 04:31:55.818515  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:55.818786  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:56.318390  475694 type.go:168] "Request Body" body=""
	I1216 04:31:56.318481  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:56.318813  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:56.318866  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:56.818718  475694 type.go:168] "Request Body" body=""
	I1216 04:31:56.818805  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:56.819146  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:57.318413  475694 type.go:168] "Request Body" body=""
	I1216 04:31:57.318491  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:57.318738  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:57.818407  475694 type.go:168] "Request Body" body=""
	I1216 04:31:57.818490  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:57.818817  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:58.319373  475694 type.go:168] "Request Body" body=""
	I1216 04:31:58.319454  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:58.319808  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:31:58.319866  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:31:58.818411  475694 type.go:168] "Request Body" body=""
	I1216 04:31:58.818485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:58.818811  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:59.318438  475694 type.go:168] "Request Body" body=""
	I1216 04:31:59.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:59.318871  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:31:59.818458  475694 type.go:168] "Request Body" body=""
	I1216 04:31:59.818539  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:31:59.818868  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:00.318393  475694 type.go:168] "Request Body" body=""
	I1216 04:32:00.318480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:00.318804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:00.818402  475694 type.go:168] "Request Body" body=""
	I1216 04:32:00.818504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:00.818841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:00.818896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:01.318351  475694 type.go:168] "Request Body" body=""
	I1216 04:32:01.318435  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:01.318792  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:01.818924  475694 type.go:168] "Request Body" body=""
	I1216 04:32:01.819034  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:01.819306  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:02.319090  475694 type.go:168] "Request Body" body=""
	I1216 04:32:02.319167  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:02.319503  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:02.819160  475694 type.go:168] "Request Body" body=""
	I1216 04:32:02.819236  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:02.819573  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:02.819634  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:03.318344  475694 type.go:168] "Request Body" body=""
	I1216 04:32:03.318419  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:03.318768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:03.818452  475694 type.go:168] "Request Body" body=""
	I1216 04:32:03.818529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:03.818850  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:04.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:32:04.318526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:04.318821  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:04.818415  475694 type.go:168] "Request Body" body=""
	I1216 04:32:04.818489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:04.818766  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:05.318488  475694 type.go:168] "Request Body" body=""
	I1216 04:32:05.318585  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:05.318952  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:05.319013  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:05.818686  475694 type.go:168] "Request Body" body=""
	I1216 04:32:05.818766  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:05.819098  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:06.318837  475694 type.go:168] "Request Body" body=""
	I1216 04:32:06.318913  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:06.319181  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:06.819150  475694 type.go:168] "Request Body" body=""
	I1216 04:32:06.819232  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:06.819586  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:07.319256  475694 type.go:168] "Request Body" body=""
	I1216 04:32:07.319343  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:07.319687  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:07.319743  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:07.819375  475694 type.go:168] "Request Body" body=""
	I1216 04:32:07.819456  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:07.819717  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:08.318408  475694 type.go:168] "Request Body" body=""
	I1216 04:32:08.318487  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:08.318845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:08.818405  475694 type.go:168] "Request Body" body=""
	I1216 04:32:08.818488  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:08.818845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:09.318425  475694 type.go:168] "Request Body" body=""
	I1216 04:32:09.318495  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:09.318754  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:09.818410  475694 type.go:168] "Request Body" body=""
	I1216 04:32:09.818492  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:09.818839  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:09.818896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
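	(While a loop like this spins, probing the endpoint directly is a quick way to separate "apiserver down" from "network path broken". A self-contained sketch against the address from the log follows; InsecureSkipVerify and the unauthenticated /readyz probe are assumptions to keep it short, and a locked-down apiserver may answer 401 rather than 200.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity; a real check would trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8441/readyz")
	if err != nil {
		// While kube-apiserver is down this mirrors the log:
		// "dial tcp 192.168.49.2:8441: connect: connection refused".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /readyz:", resp.Status)
}

	(A connection-refused result here, as in this run, points at the kube-apiserver process itself rather than DNS, routing, or the Docker network.)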
	I1216 04:32:10.318578  475694 type.go:168] "Request Body" body=""
	I1216 04:32:10.318664  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:10.319047  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:10.818778  475694 type.go:168] "Request Body" body=""
	I1216 04:32:10.818852  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:10.819114  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:11.318395  475694 type.go:168] "Request Body" body=""
	I1216 04:32:11.318476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:11.318821  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:11.819011  475694 type.go:168] "Request Body" body=""
	I1216 04:32:11.819097  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:11.819452  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:11.819512  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:12.319053  475694 type.go:168] "Request Body" body=""
	I1216 04:32:12.319128  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:12.319419  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:12.819173  475694 type.go:168] "Request Body" body=""
	I1216 04:32:12.819252  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:12.819584  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:13.319194  475694 type.go:168] "Request Body" body=""
	I1216 04:32:13.319275  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:13.319589  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:13.819219  475694 type.go:168] "Request Body" body=""
	I1216 04:32:13.819286  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:13.819552  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:13.819595  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:14.319398  475694 type.go:168] "Request Body" body=""
	I1216 04:32:14.319472  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:14.319816  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:14.818518  475694 type.go:168] "Request Body" body=""
	I1216 04:32:14.818598  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:14.818951  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:15.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:32:15.318496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:15.318748  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:15.818367  475694 type.go:168] "Request Body" body=""
	I1216 04:32:15.818442  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:15.818778  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:16.318368  475694 type.go:168] "Request Body" body=""
	I1216 04:32:16.318450  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:16.318785  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:16.318842  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:16.818645  475694 type.go:168] "Request Body" body=""
	I1216 04:32:16.818715  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:16.818981  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:17.318355  475694 type.go:168] "Request Body" body=""
	I1216 04:32:17.318433  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:17.318766  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:17.818482  475694 type.go:168] "Request Body" body=""
	I1216 04:32:17.818562  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:17.818895  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:32:18.318566  475694 type.go:168] "Request Body" body=""
	I1216 04:32:18.318640  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:18.318945  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:32:18.319006  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:32:18.818434  475694 type.go:168] "Request Body" body=""
	I1216 04:32:18.818516  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:32:18.818842  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... ~120 near-identical poll cycles elided: the GET https://192.168.49.2:8441/api/v1/nodes/functional-763073 request above repeats every ~500ms from 04:32:19 through 04:33:20, with every attempt failing ("dial tcp 192.168.49.2:8441: connect: connection refused" while the apiserver is unreachable) and node_ready.go emitting the "will retry" warning roughly every two seconds ...]
	I1216 04:33:20.818454  475694 type.go:168] "Request Body" body=""
	I1216 04:33:20.818529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:20.818885  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:21.318446  475694 type.go:168] "Request Body" body=""
	I1216 04:33:21.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:21.318772  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:21.318812  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:21.818938  475694 type.go:168] "Request Body" body=""
	I1216 04:33:21.819020  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:21.819385  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:22.319177  475694 type.go:168] "Request Body" body=""
	I1216 04:33:22.319262  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:22.319560  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:22.819291  475694 type.go:168] "Request Body" body=""
	I1216 04:33:22.819372  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:22.819640  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:23.319349  475694 type.go:168] "Request Body" body=""
	I1216 04:33:23.319428  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:23.319751  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:23.319801  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:23.818459  475694 type.go:168] "Request Body" body=""
	I1216 04:33:23.818541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:23.818861  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:24.318408  475694 type.go:168] "Request Body" body=""
	I1216 04:33:24.318487  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:24.318829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:24.818401  475694 type.go:168] "Request Body" body=""
	I1216 04:33:24.818485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:24.818792  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:25.318454  475694 type.go:168] "Request Body" body=""
	I1216 04:33:25.318545  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:25.318944  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:25.818409  475694 type.go:168] "Request Body" body=""
	I1216 04:33:25.818485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:25.818745  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:25.818786  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:26.318438  475694 type.go:168] "Request Body" body=""
	I1216 04:33:26.318513  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:26.318852  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:26.818684  475694 type.go:168] "Request Body" body=""
	I1216 04:33:26.818758  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:26.819084  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:27.318750  475694 type.go:168] "Request Body" body=""
	I1216 04:33:27.318819  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:27.319109  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:27.818989  475694 type.go:168] "Request Body" body=""
	I1216 04:33:27.819067  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:27.819405  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:27.819467  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:28.319223  475694 type.go:168] "Request Body" body=""
	I1216 04:33:28.319304  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:28.319635  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:28.819335  475694 type.go:168] "Request Body" body=""
	I1216 04:33:28.819403  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:28.819660  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:29.319416  475694 type.go:168] "Request Body" body=""
	I1216 04:33:29.319496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:29.319818  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:29.818393  475694 type.go:168] "Request Body" body=""
	I1216 04:33:29.818474  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:29.818789  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:30.318337  475694 type.go:168] "Request Body" body=""
	I1216 04:33:30.318409  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:30.318735  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:30.318791  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:30.818464  475694 type.go:168] "Request Body" body=""
	I1216 04:33:30.818550  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:30.818923  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:31.318395  475694 type.go:168] "Request Body" body=""
	I1216 04:33:31.318467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:31.318757  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:31.818850  475694 type.go:168] "Request Body" body=""
	I1216 04:33:31.818935  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:31.819244  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:32.319014  475694 type.go:168] "Request Body" body=""
	I1216 04:33:32.319087  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:32.319396  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:32.319454  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:32.819204  475694 type.go:168] "Request Body" body=""
	I1216 04:33:32.819281  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:32.819603  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:33.318343  475694 type.go:168] "Request Body" body=""
	I1216 04:33:33.318412  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:33.318673  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:33.818346  475694 type.go:168] "Request Body" body=""
	I1216 04:33:33.818425  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:33.818774  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:34.318496  475694 type.go:168] "Request Body" body=""
	I1216 04:33:34.318588  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:34.318954  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:34.818515  475694 type.go:168] "Request Body" body=""
	I1216 04:33:34.818592  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:34.818900  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:34.818954  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:35.318474  475694 type.go:168] "Request Body" body=""
	I1216 04:33:35.318547  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:35.318865  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:35.818444  475694 type.go:168] "Request Body" body=""
	I1216 04:33:35.818522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:35.818838  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:36.319305  475694 type.go:168] "Request Body" body=""
	I1216 04:33:36.319382  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:36.319641  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:36.818606  475694 type.go:168] "Request Body" body=""
	I1216 04:33:36.818685  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:36.819006  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:36.819059  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:37.318444  475694 type.go:168] "Request Body" body=""
	I1216 04:33:37.318524  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:37.319017  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:37.819328  475694 type.go:168] "Request Body" body=""
	I1216 04:33:37.819394  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:37.819638  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:38.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:33:38.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:38.318866  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:38.818571  475694 type.go:168] "Request Body" body=""
	I1216 04:33:38.818700  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:38.819026  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:38.819078  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:39.318710  475694 type.go:168] "Request Body" body=""
	I1216 04:33:39.318778  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:39.319044  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:39.819409  475694 type.go:168] "Request Body" body=""
	I1216 04:33:39.819485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:39.819829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:40.318446  475694 type.go:168] "Request Body" body=""
	I1216 04:33:40.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:40.318839  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:40.818412  475694 type.go:168] "Request Body" body=""
	I1216 04:33:40.818486  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:40.818796  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:41.318443  475694 type.go:168] "Request Body" body=""
	I1216 04:33:41.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:41.318852  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:41.318906  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:41.819057  475694 type.go:168] "Request Body" body=""
	I1216 04:33:41.819136  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:41.819499  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:42.319351  475694 type.go:168] "Request Body" body=""
	I1216 04:33:42.319425  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:42.319803  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:42.818567  475694 type.go:168] "Request Body" body=""
	I1216 04:33:42.818642  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:42.818971  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:43.318717  475694 type.go:168] "Request Body" body=""
	I1216 04:33:43.318804  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:43.319182  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:43.319246  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:43.818995  475694 type.go:168] "Request Body" body=""
	I1216 04:33:43.819063  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:43.819321  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:44.318768  475694 type.go:168] "Request Body" body=""
	I1216 04:33:44.318846  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:44.319210  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:44.819022  475694 type.go:168] "Request Body" body=""
	I1216 04:33:44.819099  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:44.819428  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:45.319171  475694 type.go:168] "Request Body" body=""
	I1216 04:33:45.319254  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:45.319544  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:45.319590  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:45.819402  475694 type.go:168] "Request Body" body=""
	I1216 04:33:45.819476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:45.819848  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:46.318583  475694 type.go:168] "Request Body" body=""
	I1216 04:33:46.318665  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:46.319025  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:46.818778  475694 type.go:168] "Request Body" body=""
	I1216 04:33:46.818846  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:46.819141  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:47.318533  475694 type.go:168] "Request Body" body=""
	I1216 04:33:47.318611  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:47.318979  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:47.818444  475694 type.go:168] "Request Body" body=""
	I1216 04:33:47.818523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:47.818889  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:47.818943  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:48.318411  475694 type.go:168] "Request Body" body=""
	I1216 04:33:48.318485  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:48.318751  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:48.818469  475694 type.go:168] "Request Body" body=""
	I1216 04:33:48.818563  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:48.818990  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:49.318697  475694 type.go:168] "Request Body" body=""
	I1216 04:33:49.318781  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:49.319111  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:49.818787  475694 type.go:168] "Request Body" body=""
	I1216 04:33:49.818863  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:49.819129  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:49.819172  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:50.318462  475694 type.go:168] "Request Body" body=""
	I1216 04:33:50.318541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:50.318886  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:50.818606  475694 type.go:168] "Request Body" body=""
	I1216 04:33:50.818682  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:50.819022  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:51.318712  475694 type.go:168] "Request Body" body=""
	I1216 04:33:51.318781  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:51.319167  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:51.819070  475694 type.go:168] "Request Body" body=""
	I1216 04:33:51.819144  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:51.819478  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:51.819532  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:52.319248  475694 type.go:168] "Request Body" body=""
	I1216 04:33:52.319323  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:52.319652  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:52.819368  475694 type.go:168] "Request Body" body=""
	I1216 04:33:52.819441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:52.819761  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:53.318435  475694 type.go:168] "Request Body" body=""
	I1216 04:33:53.318511  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:53.318783  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:53.818474  475694 type.go:168] "Request Body" body=""
	I1216 04:33:53.818549  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:53.818887  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:54.319385  475694 type.go:168] "Request Body" body=""
	I1216 04:33:54.319453  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:54.319704  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:54.319744  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:54.818347  475694 type.go:168] "Request Body" body=""
	I1216 04:33:54.818422  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:54.818747  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:55.318483  475694 type.go:168] "Request Body" body=""
	I1216 04:33:55.318582  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:55.318963  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:55.818650  475694 type.go:168] "Request Body" body=""
	I1216 04:33:55.818724  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:55.819014  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:56.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:33:56.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:56.318842  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:56.818765  475694 type.go:168] "Request Body" body=""
	I1216 04:33:56.818843  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:56.819221  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:56.819280  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:57.318987  475694 type.go:168] "Request Body" body=""
	I1216 04:33:57.319070  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:57.319350  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:57.819171  475694 type.go:168] "Request Body" body=""
	I1216 04:33:57.819249  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:57.819603  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:58.319386  475694 type.go:168] "Request Body" body=""
	I1216 04:33:58.319472  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:58.319778  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:58.819329  475694 type.go:168] "Request Body" body=""
	I1216 04:33:58.819413  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:58.819741  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:33:58.819797  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:33:59.318438  475694 type.go:168] "Request Body" body=""
	I1216 04:33:59.318517  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:59.318860  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:33:59.818440  475694 type.go:168] "Request Body" body=""
	I1216 04:33:59.818521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:33:59.818866  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:00.328767  475694 type.go:168] "Request Body" body=""
	I1216 04:34:00.328849  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:00.329179  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:00.819012  475694 type.go:168] "Request Body" body=""
	I1216 04:34:00.819093  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:00.819419  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:01.319182  475694 type.go:168] "Request Body" body=""
	I1216 04:34:01.319271  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:01.319631  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:01.319685  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:01.818692  475694 type.go:168] "Request Body" body=""
	I1216 04:34:01.818765  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:01.819031  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:02.318365  475694 type.go:168] "Request Body" body=""
	I1216 04:34:02.318443  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:02.318747  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:02.818390  475694 type.go:168] "Request Body" body=""
	I1216 04:34:02.818471  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:02.818800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:03.318344  475694 type.go:168] "Request Body" body=""
	I1216 04:34:03.318422  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:03.318678  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:03.818350  475694 type.go:168] "Request Body" body=""
	I1216 04:34:03.818431  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:03.818768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:03.818824  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:04.319347  475694 type.go:168] "Request Body" body=""
	I1216 04:34:04.319423  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:04.319769  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:04.818540  475694 type.go:168] "Request Body" body=""
	I1216 04:34:04.818608  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:04.818855  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:05.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:34:05.318534  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:05.318911  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:05.818481  475694 type.go:168] "Request Body" body=""
	I1216 04:34:05.818570  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:05.818899  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:05.818957  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:06.319348  475694 type.go:168] "Request Body" body=""
	I1216 04:34:06.319422  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:06.319689  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:06.818777  475694 type.go:168] "Request Body" body=""
	I1216 04:34:06.818855  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:06.819214  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:07.319022  475694 type.go:168] "Request Body" body=""
	I1216 04:34:07.319101  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:07.319438  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:07.819176  475694 type.go:168] "Request Body" body=""
	I1216 04:34:07.819248  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:07.819494  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:07.819532  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:08.319248  475694 type.go:168] "Request Body" body=""
	I1216 04:34:08.319324  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:08.319660  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:08.819334  475694 type.go:168] "Request Body" body=""
	I1216 04:34:08.819414  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:08.819748  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:09.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:34:09.318487  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:09.318728  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:09.818404  475694 type.go:168] "Request Body" body=""
	I1216 04:34:09.818495  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:09.818787  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:10.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:34:10.318526  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:10.318882  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:10.318937  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:10.819332  475694 type.go:168] "Request Body" body=""
	I1216 04:34:10.819407  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:10.819663  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:11.318375  475694 type.go:168] "Request Body" body=""
	I1216 04:34:11.318447  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:11.318755  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:11.818996  475694 type.go:168] "Request Body" body=""
	I1216 04:34:11.819077  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:11.819410  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:12.319155  475694 type.go:168] "Request Body" body=""
	I1216 04:34:12.319226  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:12.319475  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:12.319519  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:12.819271  475694 type.go:168] "Request Body" body=""
	I1216 04:34:12.819346  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:12.819689  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:13.318380  475694 type.go:168] "Request Body" body=""
	I1216 04:34:13.318462  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:13.318793  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:13.818479  475694 type.go:168] "Request Body" body=""
	I1216 04:34:13.818559  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:13.818826  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:14.318453  475694 type.go:168] "Request Body" body=""
	I1216 04:34:14.318535  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:14.318885  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:14.818564  475694 type.go:168] "Request Body" body=""
	I1216 04:34:14.818639  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:14.818968  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:14.819021  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:15.318668  475694 type.go:168] "Request Body" body=""
	I1216 04:34:15.318742  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:15.319003  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:15.818382  475694 type.go:168] "Request Body" body=""
	I1216 04:34:15.818461  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:15.818778  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:16.318445  475694 type.go:168] "Request Body" body=""
	I1216 04:34:16.318521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:16.318867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:16.818753  475694 type.go:168] "Request Body" body=""
	I1216 04:34:16.818825  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:16.819126  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:16.819186  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:17.318469  475694 type.go:168] "Request Body" body=""
	I1216 04:34:17.318558  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:17.318854  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:17.818418  475694 type.go:168] "Request Body" body=""
	I1216 04:34:17.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:17.818784  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:18.318425  475694 type.go:168] "Request Body" body=""
	I1216 04:34:18.318500  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:18.318756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:18.818343  475694 type.go:168] "Request Body" body=""
	I1216 04:34:18.818425  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:18.818802  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:19.318462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:19.318541  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:19.318861  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:19.318915  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:19.818577  475694 type.go:168] "Request Body" body=""
	I1216 04:34:19.818646  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:19.818927  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:20.318439  475694 type.go:168] "Request Body" body=""
	I1216 04:34:20.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:20.318833  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:20.818433  475694 type.go:168] "Request Body" body=""
	I1216 04:34:20.818521  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:20.818837  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:21.319360  475694 type.go:168] "Request Body" body=""
	I1216 04:34:21.319430  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:21.319702  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:21.319743  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:21.818995  475694 type.go:168] "Request Body" body=""
	I1216 04:34:21.819068  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:21.819437  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:22.319208  475694 type.go:168] "Request Body" body=""
	I1216 04:34:22.319287  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:22.319613  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:22.819318  475694 type.go:168] "Request Body" body=""
	I1216 04:34:22.819390  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:22.819643  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:23.318344  475694 type.go:168] "Request Body" body=""
	I1216 04:34:23.318422  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:23.318762  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:23.818462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:23.818537  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:23.818875  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:23.818927  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:24.318334  475694 type.go:168] "Request Body" body=""
	I1216 04:34:24.318402  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:24.318670  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:24.818364  475694 type.go:168] "Request Body" body=""
	I1216 04:34:24.818442  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:24.818790  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:25.318379  475694 type.go:168] "Request Body" body=""
	I1216 04:34:25.318455  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:25.318831  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:25.818514  475694 type.go:168] "Request Body" body=""
	I1216 04:34:25.818579  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:25.818836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:26.318398  475694 type.go:168] "Request Body" body=""
	I1216 04:34:26.318476  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:26.318806  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:26.318858  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:26.818668  475694 type.go:168] "Request Body" body=""
	I1216 04:34:26.818748  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:26.819069  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:27.319360  475694 type.go:168] "Request Body" body=""
	I1216 04:34:27.319437  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:27.319709  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:27.818413  475694 type.go:168] "Request Body" body=""
	I1216 04:34:27.818495  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:27.818834  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:28.318554  475694 type.go:168] "Request Body" body=""
	I1216 04:34:28.318636  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:28.318951  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:28.319002  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:28.818426  475694 type.go:168] "Request Body" body=""
	I1216 04:34:28.818493  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:28.818750  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:29.319398  475694 type.go:168] "Request Body" body=""
	I1216 04:34:29.319469  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:29.319795  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:29.818453  475694 type.go:168] "Request Body" body=""
	I1216 04:34:29.818532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:29.818867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:30.319342  475694 type.go:168] "Request Body" body=""
	I1216 04:34:30.319416  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:30.319671  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:30.319711  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:30.818394  475694 type.go:168] "Request Body" body=""
	I1216 04:34:30.818480  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:30.818849  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:31.318420  475694 type.go:168] "Request Body" body=""
	I1216 04:34:31.318497  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:31.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:31.818933  475694 type.go:168] "Request Body" body=""
	I1216 04:34:31.819001  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:31.819258  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:32.319093  475694 type.go:168] "Request Body" body=""
	I1216 04:34:32.319167  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:32.319503  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:32.819320  475694 type.go:168] "Request Body" body=""
	I1216 04:34:32.819401  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:32.819759  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:32.819825  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:33.318460  475694 type.go:168] "Request Body" body=""
	I1216 04:34:33.318582  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:33.318841  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:33.818458  475694 type.go:168] "Request Body" body=""
	I1216 04:34:33.818536  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:33.818889  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:34.318460  475694 type.go:168] "Request Body" body=""
	I1216 04:34:34.318539  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:34.318890  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:34.818406  475694 type.go:168] "Request Body" body=""
	I1216 04:34:34.818484  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:34.818755  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:35.318438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:35.318523  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:35.318826  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:35.318869  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:35.818405  475694 type.go:168] "Request Body" body=""
	I1216 04:34:35.818477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:35.818828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:36.318423  475694 type.go:168] "Request Body" body=""
	I1216 04:34:36.318497  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:36.318761  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:36.818896  475694 type.go:168] "Request Body" body=""
	I1216 04:34:36.818970  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:36.819296  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:37.318456  475694 type.go:168] "Request Body" body=""
	I1216 04:34:37.318532  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:37.318915  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:37.318974  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:37.818620  475694 type.go:168] "Request Body" body=""
	I1216 04:34:37.818687  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:37.818946  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:38.318430  475694 type.go:168] "Request Body" body=""
	I1216 04:34:38.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:38.318862  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:38.818581  475694 type.go:168] "Request Body" body=""
	I1216 04:34:38.818653  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:38.818976  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:39.319318  475694 type.go:168] "Request Body" body=""
	I1216 04:34:39.319398  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:39.319717  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:39.319766  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:39.819368  475694 type.go:168] "Request Body" body=""
	I1216 04:34:39.819451  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:39.819802  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:40.319399  475694 type.go:168] "Request Body" body=""
	I1216 04:34:40.319478  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:40.319815  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:40.819382  475694 type.go:168] "Request Body" body=""
	I1216 04:34:40.819458  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:40.819720  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:41.318432  475694 type.go:168] "Request Body" body=""
	I1216 04:34:41.318502  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:41.318828  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:41.818913  475694 type.go:168] "Request Body" body=""
	I1216 04:34:41.818984  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:41.819332  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:41.819390  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:42.319148  475694 type.go:168] "Request Body" body=""
	I1216 04:34:42.319222  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:42.319522  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:42.819320  475694 type.go:168] "Request Body" body=""
	I1216 04:34:42.819397  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:42.819739  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:43.318412  475694 type.go:168] "Request Body" body=""
	I1216 04:34:43.318503  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:43.319081  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:43.818686  475694 type.go:168] "Request Body" body=""
	I1216 04:34:43.818751  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:43.819000  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:44.318419  475694 type.go:168] "Request Body" body=""
	I1216 04:34:44.318489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:44.318800  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:44.318860  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:44.818438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:44.818518  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:44.818902  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:45.319407  475694 type.go:168] "Request Body" body=""
	I1216 04:34:45.319489  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:45.319845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:45.818371  475694 type.go:168] "Request Body" body=""
	I1216 04:34:45.818447  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:45.818804  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:46.318536  475694 type.go:168] "Request Body" body=""
	I1216 04:34:46.318624  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:46.318974  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:46.319036  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:46.818922  475694 type.go:168] "Request Body" body=""
	I1216 04:34:46.819000  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:46.819277  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:47.319079  475694 type.go:168] "Request Body" body=""
	I1216 04:34:47.319153  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:47.319486  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:47.819266  475694 type.go:168] "Request Body" body=""
	I1216 04:34:47.819341  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:47.819660  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:48.319327  475694 type.go:168] "Request Body" body=""
	I1216 04:34:48.319403  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:48.319723  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:48.319773  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:48.818362  475694 type.go:168] "Request Body" body=""
	I1216 04:34:48.818441  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:48.818771  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:49.318493  475694 type.go:168] "Request Body" body=""
	I1216 04:34:49.318566  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:49.318886  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:49.818551  475694 type.go:168] "Request Body" body=""
	I1216 04:34:49.818618  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:49.818873  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:50.318400  475694 type.go:168] "Request Body" body=""
	I1216 04:34:50.318482  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:50.318812  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:50.818522  475694 type.go:168] "Request Body" body=""
	I1216 04:34:50.818600  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:50.818928  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:50.818980  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:51.318625  475694 type.go:168] "Request Body" body=""
	I1216 04:34:51.318702  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:51.319079  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:51.819046  475694 type.go:168] "Request Body" body=""
	I1216 04:34:51.819123  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:51.819663  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:52.319344  475694 type.go:168] "Request Body" body=""
	I1216 04:34:52.319417  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:52.319779  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:52.818421  475694 type.go:168] "Request Body" body=""
	I1216 04:34:52.818496  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:52.818829  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:53.318447  475694 type.go:168] "Request Body" body=""
	I1216 04:34:53.318522  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:53.318845  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:53.318897  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:53.818432  475694 type.go:168] "Request Body" body=""
	I1216 04:34:53.818506  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:53.818834  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:54.319276  475694 type.go:168] "Request Body" body=""
	I1216 04:34:54.319352  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:54.319592  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:54.819372  475694 type.go:168] "Request Body" body=""
	I1216 04:34:54.819451  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:54.819794  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:55.318383  475694 type.go:168] "Request Body" body=""
	I1216 04:34:55.318468  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:55.318798  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:55.818467  475694 type.go:168] "Request Body" body=""
	I1216 04:34:55.818538  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:55.818798  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:55.818839  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:56.318396  475694 type.go:168] "Request Body" body=""
	I1216 04:34:56.318467  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:56.318799  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:56.818695  475694 type.go:168] "Request Body" body=""
	I1216 04:34:56.818770  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:56.819054  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:57.318729  475694 type.go:168] "Request Body" body=""
	I1216 04:34:57.318810  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:57.319103  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:57.818438  475694 type.go:168] "Request Body" body=""
	I1216 04:34:57.818512  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:57.818836  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:57.818893  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:34:58.318454  475694 type.go:168] "Request Body" body=""
	I1216 04:34:58.318529  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:58.318867  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:58.818427  475694 type.go:168] "Request Body" body=""
	I1216 04:34:58.818499  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:58.818756  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:59.318451  475694 type.go:168] "Request Body" body=""
	I1216 04:34:59.318530  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:59.318870  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:34:59.818462  475694 type.go:168] "Request Body" body=""
	I1216 04:34:59.818542  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:34:59.818859  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:34:59.818914  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:00.326681  475694 type.go:168] "Request Body" body=""
	I1216 04:35:00.327158  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:00.327589  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:00.818334  475694 type.go:168] "Request Body" body=""
	I1216 04:35:00.818414  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:00.818768  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:01.318487  475694 type.go:168] "Request Body" body=""
	I1216 04:35:01.318573  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:01.318953  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:01.818952  475694 type.go:168] "Request Body" body=""
	I1216 04:35:01.819020  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:01.819285  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:01.819326  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:02.319143  475694 type.go:168] "Request Body" body=""
	I1216 04:35:02.319233  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:02.319559  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:02.819407  475694 type.go:168] "Request Body" body=""
	I1216 04:35:02.819477  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:02.819810  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:03.318360  475694 type.go:168] "Request Body" body=""
	I1216 04:35:03.318434  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:03.318682  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:03.818469  475694 type.go:168] "Request Body" body=""
	I1216 04:35:03.818556  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:03.818922  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:04.318461  475694 type.go:168] "Request Body" body=""
	I1216 04:35:04.318553  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:04.318846  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:04.318896  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	I1216 04:35:04.818557  475694 type.go:168] "Request Body" body=""
	I1216 04:35:04.818626  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:04.818950  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:05.318442  475694 type.go:168] "Request Body" body=""
	I1216 04:35:05.318519  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:05.318874  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1216 04:35:06.819271  475694 node_ready.go:55] error getting node "functional-763073" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-763073": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET /api/v1/nodes/functional-763073 poll repeats every ~500 ms with an empty response, and the same "connection refused" warning recurs roughly every 2.5 s (04:35:06 through 04:35:41); ~70 near-identical iterations elided ...]
	I1216 04:35:42.319418  475694 type.go:168] "Request Body" body=""
	I1216 04:35:42.319504  475694 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-763073" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1216 04:35:42.319849  475694 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1216 04:35:42.818357  475694 type.go:168] "Request Body" body=""
	I1216 04:35:42.818432  475694 node_ready.go:38] duration metric: took 6m0.000197669s for node "functional-763073" to be "Ready" ...
	I1216 04:35:42.821511  475694 out.go:203] 
	W1216 04:35:42.824400  475694 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 04:35:42.824420  475694 out.go:285] * 
	W1216 04:35:42.826578  475694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:35:42.829442  475694 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 04:35:51 functional-763073 crio[5388]: time="2025-12-16T04:35:51.644592632Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=991b5227-f44c-4be8-8368-76a81108b71f name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.703121678Z" level=info msg="Checking image status: minikube-local-cache-test:functional-763073" id=ce6da041-c693-4ed8-8f67-0e2dfa5f474c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.703334161Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.703408098Z" level=info msg="Image minikube-local-cache-test:functional-763073 not found" id=ce6da041-c693-4ed8-8f67-0e2dfa5f474c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.703501522Z" level=info msg="Neither image nor artifact minikube-local-cache-test:functional-763073 found" id=ce6da041-c693-4ed8-8f67-0e2dfa5f474c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.729422385Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-763073" id=9c3b6678-6461-4136-9494-36a2f286b515 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.729581123Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-763073 not found" id=9c3b6678-6461-4136-9494-36a2f286b515 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.729629378Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-763073 found" id=9c3b6678-6461-4136-9494-36a2f286b515 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.751808181Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-763073" id=f9a21605-057b-4ce8-98f7-3f87460344d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.751965918Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-763073 not found" id=f9a21605-057b-4ce8-98f7-3f87460344d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:52 functional-763073 crio[5388]: time="2025-12-16T04:35:52.752023272Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-763073 found" id=f9a21605-057b-4ce8-98f7-3f87460344d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:53 functional-763073 crio[5388]: time="2025-12-16T04:35:53.723362934Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1254b668-94d1-4907-b41e-bfc70228cac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.05968147Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b570985a-7c53-4562-aefb-ce4eaac2ce51 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.059823084Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=b570985a-7c53-4562-aefb-ce4eaac2ce51 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.059872627Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=b570985a-7c53-4562-aefb-ce4eaac2ce51 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.588213855Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=daeb8e9e-5767-459f-8fa3-2d940dcac344 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.588363969Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=daeb8e9e-5767-459f-8fa3-2d940dcac344 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.588402181Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=daeb8e9e-5767-459f-8fa3-2d940dcac344 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.612801355Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1df86d52-c634-46b1-b725-aac31e36969b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.613140189Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1df86d52-c634-46b1-b725-aac31e36969b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.613186204Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=1df86d52-c634-46b1-b725-aac31e36969b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.64335223Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a4a54ea3-fab4-4dfc-8131-481d19907593 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.643515965Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a4a54ea3-fab4-4dfc-8131-481d19907593 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:54 functional-763073 crio[5388]: time="2025-12-16T04:35:54.643556113Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=a4a54ea3-fab4-4dfc-8131-481d19907593 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:35:55 functional-763073 crio[5388]: time="2025-12-16T04:35:55.219732281Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8e58da4f-b02a-4dba-9994-86e50eea8261 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:35:59.143229    9568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:59.143963    9568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:59.145606    9568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:59.146116    9568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:35:59.147651    9568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	[Dec16 04:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:35:59 up  3:18,  0 user,  load average: 0.86, 0.42, 0.82
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:35:56 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:57 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 16 04:35:57 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:57 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:57 functional-763073 kubelet[9443]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:57 functional-763073 kubelet[9443]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:57 functional-763073 kubelet[9443]: E1216 04:35:57.402951    9443 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:57 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:57 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:58 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Dec 16 04:35:58 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:58 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:58 functional-763073 kubelet[9466]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:58 functional-763073 kubelet[9466]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:58 functional-763073 kubelet[9466]: E1216 04:35:58.117958    9466 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:58 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:58 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:35:58 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1159.
	Dec 16 04:35:58 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:58 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:35:58 functional-763073 kubelet[9492]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:58 functional-763073 kubelet[9492]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:35:58 functional-763073 kubelet[9492]: E1216 04:35:58.875191    9492 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:35:58 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:35:58 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
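The kubelet entries above point at the root cause of the whole failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release") and has crash-looped over 1150 times, so kube-apiserver on 192.168.49.2:8441 never comes up and every poll in the 6-minute wait is refused. A minimal way to confirm which cgroup hierarchy a runner is on (a standard Linux probe, not part of the test output above):

	# Prints "cgroup2fs" on a cgroup v2 (unified) host; "tmpfs" indicates the
	# legacy cgroup v1 hierarchy that this kubelet build refuses to run on.
	stat -fc %T /sys/fs/cgroup/

Per the kubeadm warning quoted later in this report, opting back into cgroup v1 for kubelet v1.35+ means setting the kubelet configuration option FailCgroupV1 to false and explicitly skipping the corresponding validation; treat the exact field spelling as version-dependent and verify it against your kubelet.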
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (588.354598ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.63s)
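The "connect: connection refused" repeated through the poll loop means nothing was listening on the apiserver port at all, consistent with the kubelet crash loop rather than a merely slow apiserver. A quick hand-run reachability probe for that endpoint (IP and port taken from the log above; an illustrative one-liner, not part of the test suite):

	# -k skips TLS verification since this is only a liveness check. Expect
	# "Connection refused" while the control plane is down, and a JSON
	# version payload once kube-apiserver is listening again.
	curl -k https://192.168.49.2:8441/version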

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-763073 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 04:38:22.217219  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:40:24.309601  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:47.375840  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:43:22.219512  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:45:24.307898  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-763073 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m14.158530748s)
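This test exercises minikube's --extra-config passthrough, whose general shape is <component>.<flag>=<value>: the flag belongs to the named Kubernetes component, not to minikube itself. Illustrative invocations of the same mechanism (the second flag is a commonly documented example, shown only for contrast):

	# Component-scoped flags forwarded to the named control-plane component:
	minikube start --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
	minikube start --extra-config=kubelet.max-pods=150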

-- stdout --
	* [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001213504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
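The troubleshooting hints in the stderr block above can be exercised against this run's node. A minimal sketch, assuming the functional-763073 profile is still up (the systemctl, journalctl, and curl invocations come from the kubeadm output itself; running them through "minikube ssh" and piping the journal into tail are illustrative choices):

	# check the kubelet unit and its journal inside the minikube node
	out/minikube-linux-arm64 ssh -p functional-763073 -- sudo systemctl status kubelet
	out/minikube-linux-arm64 ssh -p functional-763073 -- sudo journalctl -u kubelet --no-pager | tail -n 50
	# probe the same healthz endpoint the kubeadm wait loop polls
	out/minikube-linux-arm64 ssh -p functional-763073 -- curl -sS http://127.0.0.1:10248/healthz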
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-763073 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m14.159880341s for "functional-763073" cluster.
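The suggestion block above points at a kubelet cgroup-driver mismatch (the docker info lines later in this log show CgroupDriver:cgroupfs on the host). A hedged reproduction of the advised retry, combining the failing start command from functional_test.go:774 with the flag the suggestion names:

	out/minikube-linux-arm64 start -p functional-763073 \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all

Separately, the cgroups v1 preflight warning says kubelet v1.35+ needs the KubeletConfiguration option 'FailCgroupV1' set to 'false'. Assuming the usual lowerCamelCase spelling of that field in the config file, a sketch of the override (illustrative only: the kubeadm output above shows /var/lib/kubelet/config.yaml being rewritten on every init, so a manual edit would not survive a restart):

	out/minikube-linux-arm64 ssh -p functional-763073 -- \
	  "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml"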
I1216 04:48:14.569659  441727 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:

-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
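For spot checks against the inspect output above, the port map can be read with the same Go template the test harness uses later in this log (see the cli_runner lines in the Last Start section); here 33148 is the host port bound to the node's SSH port 22:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-763073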
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (334.343791ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
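Exit status 2 alongside "Running" on stdout is consistent with the inspect output above: the docker container is up while the control plane never became healthy, which is why the harness notes the status error "(may be ok)". Dropping the --format={{.Host}} template would show the per-component breakdown; the expected shape here (an assumption based on the failure above) is a Running host with the kubelet and apiserver not running:

	out/minikube-linux-arm64 status -p functional-763073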
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-861171 image ls --format json --alsologtostderr                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format table --alsologtostderr                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls                                                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ delete         │ -p functional-861171                                                                                                                              │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ start          │ -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │                     │
	│ start          │ -p functional-763073 --alsologtostderr -v=8                                                                                                       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │                     │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:latest                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add minikube-local-cache-test:functional-763073                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache delete minikube-local-cache-test:functional-763073                                                                        │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl images                                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │                     │
	│ cache          │ functional-763073 cache reload                                                                                                                    │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ kubectl        │ functional-763073 kubectl -- --context functional-763073 get pods                                                                                 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │                     │
	│ start          │ -p functional-763073 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:36 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:36:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:36:00.490248  481598 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:36:00.490394  481598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:36:00.490398  481598 out.go:374] Setting ErrFile to fd 2...
	I1216 04:36:00.490402  481598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:36:00.490827  481598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:36:00.491840  481598 out.go:368] Setting JSON to false
	I1216 04:36:00.492932  481598 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11907,"bootTime":1765847854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:36:00.493015  481598 start.go:143] virtualization:  
	I1216 04:36:00.496736  481598 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:36:00.500271  481598 notify.go:221] Checking for updates...
	I1216 04:36:00.500857  481598 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:36:00.504041  481598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:36:00.507246  481598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:36:00.510546  481598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:36:00.513957  481598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:36:00.517802  481598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:36:00.521529  481598 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:36:00.521658  481598 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:36:00.547571  481598 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:36:00.547683  481598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:36:00.612217  481598 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 04:36:00.602438298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:36:00.612309  481598 docker.go:319] overlay module found
	I1216 04:36:00.615642  481598 out.go:179] * Using the docker driver based on existing profile
	I1216 04:36:00.618516  481598 start.go:309] selected driver: docker
	I1216 04:36:00.618544  481598 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:00.618637  481598 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:36:00.618758  481598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:36:00.679148  481598 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 04:36:00.669430398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:36:00.679575  481598 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:36:00.679604  481598 cni.go:84] Creating CNI manager for ""
	I1216 04:36:00.679655  481598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:36:00.679698  481598 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:00.682841  481598 out.go:179] * Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	I1216 04:36:00.685829  481598 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:36:00.688866  481598 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:36:00.691890  481598 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:36:00.691964  481598 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:36:00.691972  481598 cache.go:65] Caching tarball of preloaded images
	I1216 04:36:00.691982  481598 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:36:00.692074  481598 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:36:00.692084  481598 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:36:00.692227  481598 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:36:00.712798  481598 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:36:00.712810  481598 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:36:00.712824  481598 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:36:00.712856  481598 start.go:360] acquireMachinesLock for functional-763073: {Name:mk37f96bdb0feffde12ec58bbc71256d58abc2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:36:00.712923  481598 start.go:364] duration metric: took 39.237µs to acquireMachinesLock for "functional-763073"
	I1216 04:36:00.712941  481598 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:36:00.712958  481598 fix.go:54] fixHost starting: 
	I1216 04:36:00.713253  481598 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:36:00.732242  481598 fix.go:112] recreateIfNeeded on functional-763073: state=Running err=<nil>
	W1216 04:36:00.732263  481598 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:36:00.735664  481598 out.go:252] * Updating the running docker "functional-763073" container ...
	I1216 04:36:00.735723  481598 machine.go:94] provisionDockerMachine start ...
	I1216 04:36:00.735809  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:00.753493  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:00.753813  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:00.753819  481598 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:36:00.888929  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:36:00.888952  481598 ubuntu.go:182] provisioning hostname "functional-763073"
	I1216 04:36:00.889028  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:00.908330  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:00.908643  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:00.908652  481598 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-763073 && echo "functional-763073" | sudo tee /etc/hostname
	I1216 04:36:01.055703  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:36:01.055772  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.082824  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:01.083159  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:01.083173  481598 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-763073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:36:01.221846  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:36:01.221862  481598 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:36:01.221883  481598 ubuntu.go:190] setting up certificates
	I1216 04:36:01.221900  481598 provision.go:84] configureAuth start
	I1216 04:36:01.221962  481598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:36:01.240557  481598 provision.go:143] copyHostCerts
	I1216 04:36:01.240641  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 04:36:01.240650  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:36:01.240725  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:36:01.240821  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 04:36:01.240825  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:36:01.240849  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:36:01.240902  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 04:36:01.240908  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:36:01.240929  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:36:01.240972  481598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.functional-763073 san=[127.0.0.1 192.168.49.2 functional-763073 localhost minikube]
	I1216 04:36:01.624943  481598 provision.go:177] copyRemoteCerts
	I1216 04:36:01.624996  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:36:01.625036  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.650668  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:01.753682  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:36:01.770658  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:36:01.788383  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:36:01.805726  481598 provision.go:87] duration metric: took 583.803742ms to configureAuth
	I1216 04:36:01.805744  481598 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:36:01.805933  481598 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:36:01.806039  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.826667  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:01.826973  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:01.826985  481598 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:36:02.160545  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:36:02.160560  481598 machine.go:97] duration metric: took 1.424830052s to provisionDockerMachine
	I1216 04:36:02.160570  481598 start.go:293] postStartSetup for "functional-763073" (driver="docker")
	I1216 04:36:02.160582  481598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:36:02.160662  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:36:02.160707  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.182446  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.281163  481598 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:36:02.284621  481598 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:36:02.284640  481598 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:36:02.284650  481598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:36:02.284704  481598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:36:02.284795  481598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 04:36:02.284876  481598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> hosts in /etc/test/nested/copy/441727
	I1216 04:36:02.284919  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/441727
	I1216 04:36:02.293096  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:36:02.311133  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts --> /etc/test/nested/copy/441727/hosts (40 bytes)
	I1216 04:36:02.329120  481598 start.go:296] duration metric: took 168.535354ms for postStartSetup
	I1216 04:36:02.329220  481598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:36:02.329269  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.348104  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.442235  481598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:36:02.448236  481598 fix.go:56] duration metric: took 1.735283267s for fixHost
	I1216 04:36:02.448253  481598 start.go:83] releasing machines lock for "functional-763073", held for 1.735323136s
	I1216 04:36:02.448324  481598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:36:02.466005  481598 ssh_runner.go:195] Run: cat /version.json
	I1216 04:36:02.466044  481598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:36:02.466046  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.466114  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.490975  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.491519  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.685578  481598 ssh_runner.go:195] Run: systemctl --version
	I1216 04:36:02.692865  481598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:36:02.731424  481598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:36:02.735810  481598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:36:02.735877  481598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:36:02.743925  481598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
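The find/mv above side-lines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs stays active; on this node there was nothing to disable. Reversing it by hand would follow the same pattern (a sketch, not from this run):

    # Restore any configs minikube renamed out of the way:
    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;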
	I1216 04:36:02.743939  481598 start.go:496] detecting cgroup driver to use...
	I1216 04:36:02.743971  481598 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:36:02.744017  481598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:36:02.759444  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:36:02.772624  481598 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:36:02.772678  481598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:36:02.788424  481598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:36:02.802435  481598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:36:02.920156  481598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:36:03.035227  481598 docker.go:234] disabling docker service ...
	I1216 04:36:03.035430  481598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:36:03.052008  481598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:36:03.065420  481598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:36:03.183071  481598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:36:03.294099  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:36:03.311925  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:36:03.326859  481598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:36:03.326940  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.336429  481598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:36:03.336497  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.346614  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.357523  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.366947  481598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:36:03.376549  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.385465  481598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.394383  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.404860  481598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:36:03.413465  481598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
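Applied in order, the sed edits above should leave the relevant lines of /etc/crio/crio.conf.d/02-crio.conf equivalent to the following (reconstructed from the commands, not captured from the node; the section headers are where stock CRI-O keeps these keys):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]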
	I1216 04:36:03.422752  481598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:36:03.536676  481598 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 04:36:03.720606  481598 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:36:03.720702  481598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:36:03.724603  481598 start.go:564] Will wait 60s for crictl version
	I1216 04:36:03.724660  481598 ssh_runner.go:195] Run: which crictl
	I1216 04:36:03.728340  481598 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:36:03.755140  481598 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:36:03.755232  481598 ssh_runner.go:195] Run: crio --version
	I1216 04:36:03.787753  481598 ssh_runner.go:195] Run: crio --version
	I1216 04:36:03.823457  481598 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 04:36:03.826282  481598 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:36:03.843358  481598 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:36:03.850470  481598 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 04:36:03.853320  481598 kubeadm.go:884] updating cluster {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:36:03.853444  481598 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:36:03.853515  481598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:36:03.889904  481598 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:36:03.889916  481598 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:36:03.889975  481598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:36:03.917662  481598 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:36:03.917679  481598 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:36:03.917686  481598 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 04:36:03.917785  481598 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-763073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
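Note the empty ExecStart= in the generated unit: in systemd, an empty assignment clears any ExecStart inherited from the base kubelet.service before the next line redefines it, which is what lets the 10-kubeadm.conf drop-in written below replace the stock command line. The idiom in isolation:

    # Generic systemd override pattern (illustration, not this run's file):
    [Service]
    ExecStart=
    ExecStart=/path/to/new/binary --with-new-flags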
	I1216 04:36:03.917879  481598 ssh_runner.go:195] Run: crio config
	I1216 04:36:03.990629  481598 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 04:36:03.990650  481598 cni.go:84] Creating CNI manager for ""
	I1216 04:36:03.990663  481598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:36:03.990677  481598 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:36:03.990700  481598 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-763073 NodeName:functional-763073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:36:03.990828  481598 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-763073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
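A config like the one above can be checked offline before the init phases run; recent kubeadm releases (v1.26+) ship a validate subcommand, so the staged v1.35.0-beta.0 binary should accept this (a sanity check this run does not perform):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml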
	
	I1216 04:36:03.990905  481598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:36:03.999067  481598 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:36:03.999139  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:36:04.008352  481598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 04:36:04.030586  481598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:36:04.045153  481598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1216 04:36:04.060527  481598 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:36:04.065456  481598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:36:04.194475  481598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:36:04.817563  481598 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073 for IP: 192.168.49.2
	I1216 04:36:04.817574  481598 certs.go:195] generating shared ca certs ...
	I1216 04:36:04.817590  481598 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:36:04.817743  481598 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:36:04.817795  481598 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:36:04.817801  481598 certs.go:257] generating profile certs ...
	I1216 04:36:04.817883  481598 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key
	I1216 04:36:04.817938  481598 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195
	I1216 04:36:04.817975  481598 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key
	I1216 04:36:04.818092  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 04:36:04.818123  481598 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 04:36:04.818130  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:36:04.818156  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:36:04.818185  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:36:04.818212  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:36:04.818262  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:36:04.818840  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:36:04.841132  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:36:04.865044  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:36:04.885624  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:36:04.903731  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:36:04.922117  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:36:04.940753  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:36:04.958685  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:36:04.976252  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:36:04.996895  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 04:36:05.024451  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 04:36:05.043756  481598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:36:05.056987  481598 ssh_runner.go:195] Run: openssl version
	I1216 04:36:05.063602  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.071513  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 04:36:05.079286  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.083120  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.083179  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.124591  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:36:05.132537  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.139980  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:36:05.147726  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.151460  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.151517  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.192644  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:36:05.200305  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.207653  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 04:36:05.215074  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.218794  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.218861  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.260201  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
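The test/ln/openssl sequence above, run once per CA, builds the standard OpenSSL hashed-directory layout: each CA file gets a symlink in /etc/ssl/certs named <subject-hash>.0, where the hash is what "openssl x509 -hash" prints (3ec20f2e, b5213941 and 51391683 here). Done by hand the pattern is:

    # Placeholder cert name; compute the subject hash, then link it:
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"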
	I1216 04:36:05.267700  481598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:36:05.271723  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:36:05.312770  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:36:05.354108  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:36:05.396136  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:36:05.437154  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:36:05.478283  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
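Each "-checkend 86400" above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now: exit 0 if yes, non-zero if it expires within the window, which is apparently what gates cert regeneration here. The idiom on its own:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt \
      -checkend 86400 && echo "valid for >24h" || echo "expires within 24h"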
	I1216 04:36:05.519503  481598 kubeadm.go:401] StartCluster: {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:05.519581  481598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:36:05.519651  481598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:36:05.550651  481598 cri.go:89] found id: ""
	I1216 04:36:05.550716  481598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:36:05.558332  481598 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:36:05.558341  481598 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:36:05.558398  481598 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:36:05.566851  481598 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.567385  481598 kubeconfig.go:125] found "functional-763073" server: "https://192.168.49.2:8441"
	I1216 04:36:05.568647  481598 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:36:05.577205  481598 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 04:21:27.024069044 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 04:36:04.056943145 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1216 04:36:05.577214  481598 kubeadm.go:1161] stopping kube-system containers ...
	I1216 04:36:05.577232  481598 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 04:36:05.577291  481598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:36:05.613634  481598 cri.go:89] found id: ""
	I1216 04:36:05.613693  481598 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 04:36:05.631237  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:36:05.639373  481598 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 04:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 04:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 16 04:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 16 04:25 /etc/kubernetes/scheduler.conf
	
	I1216 04:36:05.639436  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:36:05.647869  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:36:05.655663  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.655719  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:36:05.663273  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:36:05.671183  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.671243  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:36:05.678591  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:36:05.686132  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.686188  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:36:05.693450  481598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:36:05.701540  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:05.748475  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.491126  481598 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.742626292s)
	I1216 04:36:07.491187  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.697669  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.751926  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.807760  481598 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:36:07.807833  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
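The poll above relies on three pgrep flags: -f matches the pattern against the full command line, -x requires the pattern to match that line exactly, and -n selects only the newest match; the command exits non-zero (and the loop keeps going) until a kube-apiserver process with minikube in its arguments exists. Standalone:

    # Prints a PID and exits 0 only once the apiserver process is up:
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'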
	[... the same "sudo pgrep -xnf kube-apiserver.*minikube.*" poll repeated every ~500ms, 118 near-identical lines from I1216 04:36:08.308888 through I1216 04:37:06.808830, with no kube-apiserver process ever found ...]
	I1216 04:37:07.308901  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:07.808015  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:07.808111  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:07.837945  481598 cri.go:89] found id: ""
	I1216 04:37:07.837959  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.837965  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:07.837970  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:07.838028  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:07.869351  481598 cri.go:89] found id: ""
	I1216 04:37:07.869366  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.869372  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:07.869377  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:07.869436  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:07.907276  481598 cri.go:89] found id: ""
	I1216 04:37:07.907290  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.907297  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:07.907302  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:07.907360  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:07.933358  481598 cri.go:89] found id: ""
	I1216 04:37:07.933373  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.933380  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:07.933385  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:07.933443  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:07.960678  481598 cri.go:89] found id: ""
	I1216 04:37:07.960692  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.960699  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:07.960704  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:07.960761  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:07.986399  481598 cri.go:89] found id: ""
	I1216 04:37:07.986414  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.986421  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:07.986426  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:07.986483  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:08.015016  481598 cri.go:89] found id: ""
	I1216 04:37:08.015031  481598 logs.go:282] 0 containers: []
	W1216 04:37:08.015038  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:08.015046  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:08.015057  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:08.088739  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:08.088761  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:08.107036  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:08.107052  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:08.176727  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:08.167962   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.168702   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.170464   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.171100   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.172772   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:08.167962   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.168702   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.170464   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.171100   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.172772   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:08.176736  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:08.176749  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:08.244460  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:08.244483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:10.772766  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:10.783210  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:10.783271  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:10.811358  481598 cri.go:89] found id: ""
	I1216 04:37:10.811374  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.811382  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:10.811388  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:10.811451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:10.841691  481598 cri.go:89] found id: ""
	I1216 04:37:10.841705  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.841712  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:10.841717  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:10.841792  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:10.869111  481598 cri.go:89] found id: ""
	I1216 04:37:10.869133  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.869141  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:10.869146  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:10.869227  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:10.897617  481598 cri.go:89] found id: ""
	I1216 04:37:10.897632  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.897640  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:10.897646  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:10.897709  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:10.924814  481598 cri.go:89] found id: ""
	I1216 04:37:10.924829  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.924838  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:10.924849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:10.924909  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:10.951147  481598 cri.go:89] found id: ""
	I1216 04:37:10.951162  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.951170  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:10.951181  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:10.951240  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:10.977944  481598 cri.go:89] found id: ""
	I1216 04:37:10.977958  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.977965  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:10.977973  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:10.977984  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:11.046933  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:11.046953  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:11.062324  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:11.062340  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:11.128033  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:11.119557   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.119965   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.121750   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.122402   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.124048   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:11.119557   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.119965   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.121750   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.122402   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.124048   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:11.128044  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:11.128055  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:11.195835  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:11.195855  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
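	(The cycle above repeats below with only timestamps changing. Each pass asks the CRI for every control-plane component by name and finds nothing. A minimal Go sketch of that probe — not minikube's actual code, and assuming crictl is on PATH and runnable via sudo as in the logged command — looks like this:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component names the log cycles through.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Same flags as the logged command: all states, IDs only, filtered by name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// This is the state the log keeps reporting: found id: "" / 0 containers.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}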
	I1216 04:37:13.729443  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:13.739852  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:13.739911  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:13.765288  481598 cri.go:89] found id: ""
	I1216 04:37:13.765303  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.765310  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:13.765315  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:13.765372  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:13.791619  481598 cri.go:89] found id: ""
	I1216 04:37:13.791634  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.791641  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:13.791646  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:13.791713  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:13.829008  481598 cri.go:89] found id: ""
	I1216 04:37:13.829021  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.829028  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:13.829033  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:13.829115  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:13.860708  481598 cri.go:89] found id: ""
	I1216 04:37:13.860722  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.860729  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:13.860734  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:13.860795  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:13.890573  481598 cri.go:89] found id: ""
	I1216 04:37:13.890587  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.890594  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:13.890600  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:13.890659  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:13.921520  481598 cri.go:89] found id: ""
	I1216 04:37:13.921535  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.921543  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:13.921555  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:13.921616  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:13.950847  481598 cri.go:89] found id: ""
	I1216 04:37:13.950864  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.950882  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:13.950890  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:13.950901  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:13.965697  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:13.965713  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:14.040284  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:14.030948   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.031892   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.033714   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.034372   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.035987   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:14.030948   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.031892   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.033714   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.034372   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.035987   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:14.040295  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:14.040307  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:14.114244  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:14.114266  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:14.146926  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:14.146942  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:16.715163  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:16.725607  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:16.725688  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:16.751194  481598 cri.go:89] found id: ""
	I1216 04:37:16.751208  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.751215  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:16.751220  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:16.751277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:16.780407  481598 cri.go:89] found id: ""
	I1216 04:37:16.780421  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.780428  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:16.780433  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:16.780496  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:16.806409  481598 cri.go:89] found id: ""
	I1216 04:37:16.806424  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.806431  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:16.806436  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:16.806504  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:16.838220  481598 cri.go:89] found id: ""
	I1216 04:37:16.838235  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.838242  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:16.838247  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:16.838306  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:16.866315  481598 cri.go:89] found id: ""
	I1216 04:37:16.866329  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.866336  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:16.866341  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:16.866414  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:16.899090  481598 cri.go:89] found id: ""
	I1216 04:37:16.899105  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.899112  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:16.899117  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:16.899178  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:16.924588  481598 cri.go:89] found id: ""
	I1216 04:37:16.924603  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.924611  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:16.924618  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:16.924630  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:16.993464  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:16.993485  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:17.009562  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:17.009582  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:17.075397  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:17.067506   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.068020   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069521   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069902   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.071382   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:17.067506   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.068020   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069521   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069902   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.071382   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:17.075408  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:17.075421  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:17.144979  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:17.145001  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:19.675069  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:19.685090  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:19.685149  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:19.711697  481598 cri.go:89] found id: ""
	I1216 04:37:19.711712  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.711719  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:19.711724  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:19.711781  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:19.737017  481598 cri.go:89] found id: ""
	I1216 04:37:19.737031  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.737038  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:19.737043  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:19.737129  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:19.764129  481598 cri.go:89] found id: ""
	I1216 04:37:19.764143  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.764150  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:19.764155  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:19.764210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:19.790063  481598 cri.go:89] found id: ""
	I1216 04:37:19.790077  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.790084  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:19.790098  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:19.790154  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:19.821689  481598 cri.go:89] found id: ""
	I1216 04:37:19.821703  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.821710  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:19.821716  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:19.821774  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:19.854088  481598 cri.go:89] found id: ""
	I1216 04:37:19.854103  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.854111  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:19.854116  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:19.854178  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:19.893475  481598 cri.go:89] found id: ""
	I1216 04:37:19.893496  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.893505  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:19.893513  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:19.893524  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:19.961902  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:19.953918   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.954677   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956259   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956573   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.957902   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:19.953918   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.954677   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956259   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956573   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.957902   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:19.961916  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:19.961927  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:20.031206  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:20.031233  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:20.062576  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:20.062596  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:20.132798  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:20.132818  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:22.649716  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:22.659636  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:22.659698  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:22.684490  481598 cri.go:89] found id: ""
	I1216 04:37:22.684505  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.684512  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:22.684542  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:22.684599  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:22.709083  481598 cri.go:89] found id: ""
	I1216 04:37:22.709098  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.709105  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:22.709110  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:22.709165  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:22.734473  481598 cri.go:89] found id: ""
	I1216 04:37:22.734487  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.734494  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:22.734499  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:22.734557  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:22.759459  481598 cri.go:89] found id: ""
	I1216 04:37:22.759473  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.759480  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:22.759485  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:22.759540  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:22.784416  481598 cri.go:89] found id: ""
	I1216 04:37:22.784430  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.784437  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:22.784442  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:22.784508  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:22.808823  481598 cri.go:89] found id: ""
	I1216 04:37:22.808837  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.808844  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:22.808849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:22.808906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:22.845939  481598 cri.go:89] found id: ""
	I1216 04:37:22.845965  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.845973  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:22.845980  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:22.846001  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:22.939972  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:22.939998  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:22.969984  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:22.970003  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:23.041537  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:23.041560  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:23.059445  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:23.059461  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:23.127407  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:23.119122   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.119663   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121470   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121806   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.123327   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:23.119122   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.119663   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121470   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121806   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.123327   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
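	(Every "describe nodes" attempt above fails the same way: kubectl cannot dial the apiserver port this profile uses. The symptom reproduces without kubectl at all; a minimal sketch, with the port taken from the logged errors and nothing else assumed:)

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8441 is the apiserver endpoint in the kubectl errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// With no kube-apiserver container running, this prints
		// "connect: connection refused", matching the memcache.go errors.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}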
	I1216 04:37:25.628052  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:25.638431  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:25.638504  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:25.665151  481598 cri.go:89] found id: ""
	I1216 04:37:25.665164  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.665172  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:25.665176  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:25.665249  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:25.695604  481598 cri.go:89] found id: ""
	I1216 04:37:25.695617  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.695625  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:25.695630  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:25.695691  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:25.720754  481598 cri.go:89] found id: ""
	I1216 04:37:25.720768  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.720775  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:25.720780  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:25.720839  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:25.746771  481598 cri.go:89] found id: ""
	I1216 04:37:25.746785  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.746792  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:25.746797  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:25.746857  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:25.776233  481598 cri.go:89] found id: ""
	I1216 04:37:25.776247  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.776264  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:25.776269  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:25.776342  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:25.803891  481598 cri.go:89] found id: ""
	I1216 04:37:25.803914  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.803922  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:25.803927  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:25.804021  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:25.845002  481598 cri.go:89] found id: ""
	I1216 04:37:25.845016  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.845023  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:25.845040  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:25.845053  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:25.921736  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:25.913341   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.914262   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.915800   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.916138   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.917723   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:25.913341   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.914262   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.915800   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.916138   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.917723   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:25.921746  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:25.921757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:25.989735  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:25.989756  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:26.020992  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:26.021012  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:26.094837  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:26.094856  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:28.610236  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:28.620641  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:28.620702  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:28.648449  481598 cri.go:89] found id: ""
	I1216 04:37:28.648463  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.648470  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:28.648480  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:28.648539  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:28.675317  481598 cri.go:89] found id: ""
	I1216 04:37:28.675332  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.675339  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:28.675344  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:28.675402  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:28.700978  481598 cri.go:89] found id: ""
	I1216 04:37:28.700992  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.700998  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:28.701003  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:28.701104  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:28.726354  481598 cri.go:89] found id: ""
	I1216 04:37:28.726367  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.726374  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:28.726379  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:28.726436  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:28.752843  481598 cri.go:89] found id: ""
	I1216 04:37:28.752857  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.752864  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:28.752869  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:28.752927  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:28.778190  481598 cri.go:89] found id: ""
	I1216 04:37:28.778205  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.778212  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:28.778217  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:28.778280  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:28.803029  481598 cri.go:89] found id: ""
	I1216 04:37:28.803044  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.803051  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:28.803059  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:28.803070  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:28.896742  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:28.888260   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.888935   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890571   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890932   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.892534   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:28.888260   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.888935   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890571   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890932   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.892534   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:28.896763  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:28.896776  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:28.964206  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:28.964228  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:28.996487  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:28.996503  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:29.063978  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:29.063998  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:31.580896  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:31.591181  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:31.591249  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:31.616263  481598 cri.go:89] found id: ""
	I1216 04:37:31.616277  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.616284  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:31.616289  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:31.616345  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:31.641685  481598 cri.go:89] found id: ""
	I1216 04:37:31.641700  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.641707  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:31.641712  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:31.641771  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:31.667472  481598 cri.go:89] found id: ""
	I1216 04:37:31.667487  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.667495  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:31.667500  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:31.667557  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:31.697212  481598 cri.go:89] found id: ""
	I1216 04:37:31.697241  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.697248  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:31.697253  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:31.697311  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:31.723185  481598 cri.go:89] found id: ""
	I1216 04:37:31.723199  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.723207  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:31.723212  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:31.723273  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:31.749934  481598 cri.go:89] found id: ""
	I1216 04:37:31.749957  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.749965  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:31.749970  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:31.750035  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:31.776884  481598 cri.go:89] found id: ""
	I1216 04:37:31.776905  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.776911  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:31.776922  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:31.776933  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:31.856147  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:31.846171   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.847794   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.848402   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850247   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850827   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:31.846171   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.847794   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.848402   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850247   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850827   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:31.856168  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:31.856188  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:31.928187  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:31.928207  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:31.960005  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:31.960023  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:32.031454  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:32.031474  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:34.550103  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:34.560823  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:34.560882  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:34.587067  481598 cri.go:89] found id: ""
	I1216 04:37:34.587082  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.587092  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:34.587097  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:34.587160  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:34.613934  481598 cri.go:89] found id: ""
	I1216 04:37:34.613949  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.613956  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:34.613961  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:34.614018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:34.639997  481598 cri.go:89] found id: ""
	I1216 04:37:34.640011  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.640018  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:34.640023  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:34.640087  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:34.666140  481598 cri.go:89] found id: ""
	I1216 04:37:34.666154  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.666161  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:34.666166  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:34.666226  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:34.692116  481598 cri.go:89] found id: ""
	I1216 04:37:34.692131  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.692138  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:34.692143  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:34.692203  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:34.717134  481598 cri.go:89] found id: ""
	I1216 04:37:34.717148  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.717156  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:34.717161  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:34.717228  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:34.743931  481598 cri.go:89] found id: ""
	I1216 04:37:34.743946  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.743963  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:34.743971  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:34.743983  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:34.809826  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:34.809849  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:34.827619  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:34.827636  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:34.903666  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:34.894237   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.895124   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.896898   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.897701   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.898407   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:34.894237   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.895124   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.896898   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.897701   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.898407   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:34.903676  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:34.903686  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:34.972944  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:34.972967  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
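
Each polling cycle above runs the same presence check for every control-plane component: "sudo crictl ps -a --quiet --name=<component>", treating empty output as "no container found". As a rough illustration only (not minikube's ssh_runner-based implementation, and assuming crictl and sudo are available locally rather than over SSH), the check reduces to something like:

// Minimal local sketch of the per-component check seen in the log.
// Empty crictl output is reported as "no container found", matching
// the logs.go:284 warnings above. Illustration only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// crictl --quiet prints one container ID per line.
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("%s: %d container(s)\n", c, len(ids))
		}
	}
}
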
	I1216 04:37:37.507549  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:37.517802  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:37.517863  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:37.543131  481598 cri.go:89] found id: ""
	I1216 04:37:37.543147  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.543155  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:37.543167  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:37.543224  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:37.568202  481598 cri.go:89] found id: ""
	I1216 04:37:37.568216  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.568223  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:37.568231  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:37.568288  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:37.593976  481598 cri.go:89] found id: ""
	I1216 04:37:37.593991  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.593998  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:37.594003  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:37.594066  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:37.619760  481598 cri.go:89] found id: ""
	I1216 04:37:37.619774  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.619781  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:37.619787  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:37.619848  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:37.644836  481598 cri.go:89] found id: ""
	I1216 04:37:37.644850  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.644857  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:37.644862  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:37.644921  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:37.670454  481598 cri.go:89] found id: ""
	I1216 04:37:37.670468  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.670476  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:37.670481  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:37.670537  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:37.695742  481598 cri.go:89] found id: ""
	I1216 04:37:37.695762  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.695769  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:37.695777  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:37.695787  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:37.759713  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:37.759732  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:37.774589  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:37.774606  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:37.849933  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:37.841390   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.842110   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.843743   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.844252   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.845814   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:37.841390   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.842110   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.843743   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.844252   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.845814   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:37.849945  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:37.849955  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:37.928468  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:37.928489  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
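
Every "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver because nothing is listening on localhost:8441 ("connect: connection refused" is a TCP-level rejection, not an auth or API error). A minimal Go probe for that condition, assuming the same host and port as the log, would be:

// Quick TCP probe for the apiserver port kubectl is dialing
// ([::1]:8441 in this log). Not part of minikube; it only shows
// what "connection refused" implies: no listener on the port yet.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
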
	I1216 04:37:40.459800  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:40.470285  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:40.470349  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:40.499380  481598 cri.go:89] found id: ""
	I1216 04:37:40.499394  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.499401  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:40.499406  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:40.499464  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:40.528986  481598 cri.go:89] found id: ""
	I1216 04:37:40.529000  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.529007  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:40.529012  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:40.529089  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:40.555623  481598 cri.go:89] found id: ""
	I1216 04:37:40.555638  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.555646  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:40.555651  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:40.555708  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:40.581298  481598 cri.go:89] found id: ""
	I1216 04:37:40.581312  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.581319  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:40.581324  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:40.581382  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:40.611085  481598 cri.go:89] found id: ""
	I1216 04:37:40.611099  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.611106  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:40.611113  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:40.611173  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:40.636162  481598 cri.go:89] found id: ""
	I1216 04:37:40.636178  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.636185  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:40.636190  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:40.636250  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:40.664257  481598 cri.go:89] found id: ""
	I1216 04:37:40.664272  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.664279  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:40.664287  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:40.664299  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:40.680011  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:40.680027  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:40.745907  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:40.737277   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.738066   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.739727   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.740303   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.741915   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:40.737277   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.738066   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.739727   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.740303   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.741915   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:40.745919  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:40.745932  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:40.814715  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:40.814735  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:40.859159  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:40.859181  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:43.432718  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:43.443193  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:43.443264  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:43.469157  481598 cri.go:89] found id: ""
	I1216 04:37:43.469187  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.469195  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:43.469200  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:43.469323  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:43.494783  481598 cri.go:89] found id: ""
	I1216 04:37:43.494796  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.494804  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:43.494809  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:43.494869  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:43.521488  481598 cri.go:89] found id: ""
	I1216 04:37:43.521502  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.521509  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:43.521514  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:43.521573  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:43.550707  481598 cri.go:89] found id: ""
	I1216 04:37:43.550721  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.550728  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:43.550733  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:43.550791  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:43.579977  481598 cri.go:89] found id: ""
	I1216 04:37:43.579991  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.579997  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:43.580002  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:43.580064  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:43.605041  481598 cri.go:89] found id: ""
	I1216 04:37:43.605056  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.605143  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:43.605149  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:43.605208  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:43.631632  481598 cri.go:89] found id: ""
	I1216 04:37:43.631658  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.631665  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:43.631672  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:43.631691  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:43.701085  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:43.701111  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:43.716379  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:43.716401  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:43.778569  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:43.770070   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.770734   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.772497   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.773037   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.774731   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:43.770070   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.770734   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.772497   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.773037   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.774731   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:43.778594  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:43.778606  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:43.850663  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:43.850686  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
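
The cycle timestamps (04:37:34, :37, :40, :43, ...) show the checker retrying roughly every three seconds. A hedged sketch of such a poll loop follows; the interval and the stand-in predicate are assumptions for illustration, not minikube's actual wait code:

// Poll a readiness predicate at a fixed interval until a deadline.
package main

import (
	"errors"
	"fmt"
	"time"
)

func pollUntil(interval, timeout time.Duration, ready func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	err := pollUntil(3*time.Second, 15*time.Second, func() bool {
		// Stand-in predicate: in the log this is "is kube-apiserver
		// running?" (pgrep + crictl), which kept returning false.
		return false
	})
	fmt.Println(err)
}
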
	I1216 04:37:46.388473  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:46.398649  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:46.398713  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:46.425758  481598 cri.go:89] found id: ""
	I1216 04:37:46.425772  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.425780  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:46.425785  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:46.425843  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:46.453363  481598 cri.go:89] found id: ""
	I1216 04:37:46.453377  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.453384  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:46.453389  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:46.453450  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:46.479051  481598 cri.go:89] found id: ""
	I1216 04:37:46.479066  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.479074  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:46.479079  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:46.479135  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:46.509758  481598 cri.go:89] found id: ""
	I1216 04:37:46.509773  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.509781  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:46.509786  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:46.509849  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:46.536775  481598 cri.go:89] found id: ""
	I1216 04:37:46.536788  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.536795  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:46.536801  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:46.536870  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:46.562238  481598 cri.go:89] found id: ""
	I1216 04:37:46.562253  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.562262  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:46.562268  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:46.562326  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:46.588577  481598 cri.go:89] found id: ""
	I1216 04:37:46.588591  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.588598  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:46.588606  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:46.588617  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:46.658427  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:46.658447  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:46.692280  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:46.692304  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:46.758854  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:46.758874  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:46.778062  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:46.778079  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:46.855875  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:46.846770   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.848177   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.849959   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.850258   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.851693   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:46.846770   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.848177   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.849959   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.850258   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.851693   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:49.357557  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:49.367602  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:49.367665  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:49.393022  481598 cri.go:89] found id: ""
	I1216 04:37:49.393037  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.393044  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:49.393049  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:49.393125  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:49.421701  481598 cri.go:89] found id: ""
	I1216 04:37:49.421716  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.421723  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:49.421728  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:49.421789  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:49.447139  481598 cri.go:89] found id: ""
	I1216 04:37:49.447154  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.447161  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:49.447166  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:49.447226  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:49.472003  481598 cri.go:89] found id: ""
	I1216 04:37:49.472018  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.472026  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:49.472032  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:49.472090  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:49.497762  481598 cri.go:89] found id: ""
	I1216 04:37:49.497782  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.497790  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:49.497794  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:49.497853  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:49.527970  481598 cri.go:89] found id: ""
	I1216 04:37:49.527984  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.527992  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:49.527997  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:49.528055  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:49.554573  481598 cri.go:89] found id: ""
	I1216 04:37:49.554587  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.554596  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:49.554604  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:49.554615  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:49.620959  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:49.620979  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:49.636096  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:49.636115  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:49.705535  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:49.696916   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.697607   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699320   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699896   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.701682   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:49.696916   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.697607   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699320   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699896   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.701682   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:49.705545  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:49.705556  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:49.774081  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:49.774101  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
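
The "container status" step runs a shell fallback: prefer crictl ("which crictl || echo crictl"), and if it is absent or fails, fall back to "sudo docker ps -a". A rough Go equivalent, assuming either binary may be missing:

// Prefer crictl for listing containers; fall back to docker,
// mirroring the logged `... ps -a || sudo docker ps -a` command.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		// crictl missing or failed: try docker instead.
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("neither crictl nor docker could list containers:", err)
		return
	}
	fmt.Print(string(out))
}
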
	I1216 04:37:52.303119  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:52.313248  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:52.313317  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:52.339092  481598 cri.go:89] found id: ""
	I1216 04:37:52.339106  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.339113  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:52.339118  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:52.339181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:52.370928  481598 cri.go:89] found id: ""
	I1216 04:37:52.370942  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.370949  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:52.370954  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:52.371011  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:52.395986  481598 cri.go:89] found id: ""
	I1216 04:37:52.396000  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.396007  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:52.396012  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:52.396068  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:52.425010  481598 cri.go:89] found id: ""
	I1216 04:37:52.425024  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.425031  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:52.425036  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:52.425118  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:52.450781  481598 cri.go:89] found id: ""
	I1216 04:37:52.450796  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.450803  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:52.450808  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:52.450867  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:52.476589  481598 cri.go:89] found id: ""
	I1216 04:37:52.476603  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.476611  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:52.476617  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:52.476675  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:52.503929  481598 cri.go:89] found id: ""
	I1216 04:37:52.503944  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.503951  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:52.503959  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:52.503970  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:52.519124  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:52.519149  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:52.587049  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:52.577711   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.578577   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.580576   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.581341   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.583137   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:52.577711   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.578577   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.580576   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.581341   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.583137   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:52.587060  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:52.587072  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:52.657393  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:52.657415  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:52.686271  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:52.686289  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:55.258225  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:55.268276  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:55.268339  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:55.295458  481598 cri.go:89] found id: ""
	I1216 04:37:55.295471  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.295479  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:55.295484  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:55.295550  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:55.322181  481598 cri.go:89] found id: ""
	I1216 04:37:55.322195  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.322202  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:55.322207  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:55.322315  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:55.347301  481598 cri.go:89] found id: ""
	I1216 04:37:55.347316  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.347323  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:55.347329  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:55.347390  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:55.372973  481598 cri.go:89] found id: ""
	I1216 04:37:55.372988  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.372995  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:55.373000  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:55.373057  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:55.398159  481598 cri.go:89] found id: ""
	I1216 04:37:55.398173  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.398179  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:55.398184  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:55.398245  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:55.423108  481598 cri.go:89] found id: ""
	I1216 04:37:55.423122  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.423128  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:55.423133  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:55.423198  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:55.449345  481598 cri.go:89] found id: ""
	I1216 04:37:55.449360  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.449367  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:55.449375  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:55.449397  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:55.514641  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:55.514662  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:55.529353  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:55.529369  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:55.598810  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:55.589643   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.590588   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.591554   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593248   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593891   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:55.589643   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.590588   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.591554   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593248   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593891   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:55.598830  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:55.598842  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:55.666947  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:55.666967  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:58.197584  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:58.208946  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:58.209018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:58.234805  481598 cri.go:89] found id: ""
	I1216 04:37:58.234819  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.234826  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:58.234831  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:58.234886  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:58.259158  481598 cri.go:89] found id: ""
	I1216 04:37:58.259171  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.259178  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:58.259183  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:58.259241  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:58.286151  481598 cri.go:89] found id: ""
	I1216 04:37:58.286165  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.286172  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:58.286177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:58.286234  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:58.310737  481598 cri.go:89] found id: ""
	I1216 04:37:58.310750  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.310757  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:58.310762  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:58.310817  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:58.334963  481598 cri.go:89] found id: ""
	I1216 04:37:58.334978  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.334985  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:58.334989  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:58.335054  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:58.363884  481598 cri.go:89] found id: ""
	I1216 04:37:58.363910  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.363918  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:58.363924  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:58.363992  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:58.387948  481598 cri.go:89] found id: ""
	I1216 04:37:58.387961  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.387968  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:58.387977  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:58.387988  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:58.452873  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:58.452892  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:58.468670  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:58.468688  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:58.537376  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:58.528562   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.529202   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.530985   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.531559   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.533122   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:58.528562   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.529202   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.530985   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.531559   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.533122   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:58.537385  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:58.537396  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:58.606317  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:58.606339  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
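
Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*" to test whether an apiserver process exists at all. pgrep exits 1 when no process matches, so a wrapper has to distinguish "not running" from "pgrep itself failed". A hedged sketch (again local exec, not minikube's SSH runner):

// Interpret pgrep's exit status: 0 = match, 1 = no match,
// anything else = pgrep error.
package main

import (
	"fmt"
	"os/exec"
)

func apiserverRunning() (bool, error) {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err == nil {
		return true, nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return false, nil // no matching process
	}
	return false, err // pgrep itself failed
}

func main() {
	ok, err := apiserverRunning()
	fmt.Println(ok, err)
}
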
	I1216 04:38:01.135427  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:01.146890  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:01.146955  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:01.174260  481598 cri.go:89] found id: ""
	I1216 04:38:01.174275  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.174282  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:01.174287  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:01.174347  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:01.199944  481598 cri.go:89] found id: ""
	I1216 04:38:01.199958  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.199965  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:01.199970  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:01.200033  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:01.228798  481598 cri.go:89] found id: ""
	I1216 04:38:01.228814  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.228820  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:01.228825  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:01.228884  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:01.255775  481598 cri.go:89] found id: ""
	I1216 04:38:01.255789  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.255796  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:01.255801  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:01.255860  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:01.281657  481598 cri.go:89] found id: ""
	I1216 04:38:01.281671  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.281678  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:01.281683  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:01.281742  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:01.307766  481598 cri.go:89] found id: ""
	I1216 04:38:01.307779  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.307786  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:01.307791  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:01.307851  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:01.333581  481598 cri.go:89] found id: ""
	I1216 04:38:01.333595  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.333602  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:01.333610  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:01.333621  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:01.399337  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:01.399356  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:01.414266  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:01.414283  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:01.482637  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:01.474533   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.475363   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.476875   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.477409   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.478874   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:01.482650  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:01.482662  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:01.550883  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:01.550905  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:04.081199  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:04.093060  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:04.093177  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:04.125499  481598 cri.go:89] found id: ""
	I1216 04:38:04.125513  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.125521  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:04.125526  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:04.125595  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:04.151973  481598 cri.go:89] found id: ""
	I1216 04:38:04.151987  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.151994  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:04.151999  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:04.152058  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:04.180246  481598 cri.go:89] found id: ""
	I1216 04:38:04.180260  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.180266  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:04.180271  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:04.180328  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:04.207652  481598 cri.go:89] found id: ""
	I1216 04:38:04.207665  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.207672  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:04.207678  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:04.207735  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:04.233457  481598 cri.go:89] found id: ""
	I1216 04:38:04.233470  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.233477  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:04.233483  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:04.233540  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:04.259854  481598 cri.go:89] found id: ""
	I1216 04:38:04.259868  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.259875  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:04.259880  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:04.259941  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:04.285804  481598 cri.go:89] found id: ""
	I1216 04:38:04.285818  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.285825  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:04.285832  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:04.285843  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:04.364313  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:04.364343  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:04.397537  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:04.397559  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:04.466334  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:04.466358  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:04.481695  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:04.481712  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:04.549601  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:04.541286   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.542136   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.543652   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.544110   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.545613   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:07.049858  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:07.060224  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:07.060286  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:07.095538  481598 cri.go:89] found id: ""
	I1216 04:38:07.095552  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.095558  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:07.095572  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:07.095630  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:07.134098  481598 cri.go:89] found id: ""
	I1216 04:38:07.134113  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.134120  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:07.134125  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:07.134181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:07.160282  481598 cri.go:89] found id: ""
	I1216 04:38:07.160296  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.160312  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:07.160317  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:07.160375  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:07.186194  481598 cri.go:89] found id: ""
	I1216 04:38:07.186208  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.186215  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:07.186220  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:07.186277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:07.211185  481598 cri.go:89] found id: ""
	I1216 04:38:07.211198  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.211211  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:07.211216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:07.211274  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:07.236131  481598 cri.go:89] found id: ""
	I1216 04:38:07.236145  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.236171  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:07.236177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:07.236243  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:07.262438  481598 cri.go:89] found id: ""
	I1216 04:38:07.262452  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.262459  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:07.262467  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:07.262477  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:07.331225  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:07.331246  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:07.359219  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:07.359236  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:07.426207  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:07.426225  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:07.441345  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:07.441364  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:07.509422  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:07.501041   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.501780   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503380   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503873   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.505492   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:10.011147  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:10.023261  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:10.023327  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:10.050971  481598 cri.go:89] found id: ""
	I1216 04:38:10.050986  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.050994  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:10.050999  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:10.051073  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:10.085339  481598 cri.go:89] found id: ""
	I1216 04:38:10.085353  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.085360  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:10.085366  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:10.085434  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:10.124529  481598 cri.go:89] found id: ""
	I1216 04:38:10.124543  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.124551  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:10.124556  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:10.124624  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:10.164418  481598 cri.go:89] found id: ""
	I1216 04:38:10.164434  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.164442  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:10.164448  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:10.164517  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:10.190732  481598 cri.go:89] found id: ""
	I1216 04:38:10.190746  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.190753  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:10.190758  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:10.190815  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:10.216314  481598 cri.go:89] found id: ""
	I1216 04:38:10.216339  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.216346  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:10.216352  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:10.216419  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:10.241726  481598 cri.go:89] found id: ""
	I1216 04:38:10.241747  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.241755  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:10.241768  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:10.241780  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:10.314496  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:10.304987   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.305903   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.306681   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.308501   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.309133   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:10.314506  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:10.314520  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:10.383929  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:10.383952  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:10.414686  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:10.414703  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:10.480296  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:10.480315  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
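Every describe-nodes attempt above fails before any API request is made: kubectl's group discovery cannot connect to localhost:8441, the apiserver address that /var/lib/minikube/kubeconfig points at. A quick way to confirm that precondition from the node is to check whether anything is listening on the port at all; ss and curl are assumptions here (they do not appear in the log), and /readyz is the standard apiserver readiness endpoint:

    # Hypothetical check: is anything bound to the port the kubeconfig targets?
    sudo ss -tlnp | grep ':8441' || echo 'nothing listening on 8441'
    # If an apiserver were up, its readiness endpoint would answer:
    curl -sk https://localhost:8441/readyz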
	I1216 04:38:12.997386  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:13.013029  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:13.013152  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:13.043756  481598 cri.go:89] found id: ""
	I1216 04:38:13.043772  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.043779  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:13.043784  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:13.043841  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:13.078538  481598 cri.go:89] found id: ""
	I1216 04:38:13.078552  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.078559  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:13.078564  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:13.078625  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:13.107509  481598 cri.go:89] found id: ""
	I1216 04:38:13.107523  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.107530  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:13.107535  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:13.107590  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:13.144886  481598 cri.go:89] found id: ""
	I1216 04:38:13.144900  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.144907  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:13.144912  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:13.144967  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:13.172261  481598 cri.go:89] found id: ""
	I1216 04:38:13.172275  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.172282  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:13.172287  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:13.172346  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:13.200255  481598 cri.go:89] found id: ""
	I1216 04:38:13.200270  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.200277  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:13.200282  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:13.200339  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:13.231840  481598 cri.go:89] found id: ""
	I1216 04:38:13.231855  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.231864  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:13.231871  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:13.231882  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:13.305140  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:13.305162  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:13.320119  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:13.320135  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:13.384652  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:13.376630   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.377445   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.378990   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.379381   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.380897   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:13.384662  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:13.384672  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:13.452891  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:13.452913  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:15.986467  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:15.996642  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:15.996705  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:16.023730  481598 cri.go:89] found id: ""
	I1216 04:38:16.023745  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.023752  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:16.023757  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:16.023814  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:16.048187  481598 cri.go:89] found id: ""
	I1216 04:38:16.048202  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.048209  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:16.048214  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:16.048270  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:16.084197  481598 cri.go:89] found id: ""
	I1216 04:38:16.084210  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.084217  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:16.084222  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:16.084279  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:16.114000  481598 cri.go:89] found id: ""
	I1216 04:38:16.114014  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.114021  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:16.114026  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:16.114095  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:16.146003  481598 cri.go:89] found id: ""
	I1216 04:38:16.146016  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.146023  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:16.146028  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:16.146085  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:16.171053  481598 cri.go:89] found id: ""
	I1216 04:38:16.171067  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.171074  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:16.171079  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:16.171146  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:16.195607  481598 cri.go:89] found id: ""
	I1216 04:38:16.195621  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.195629  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:16.195637  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:16.195647  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:16.261510  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:16.261531  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:16.276956  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:16.276972  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:16.337904  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:16.329776   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.330345   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.331568   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.332130   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.333841   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:16.337914  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:16.337925  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:16.407434  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:16.407456  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:18.938513  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:18.948612  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:18.948671  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:18.973989  481598 cri.go:89] found id: ""
	I1216 04:38:18.974004  481598 logs.go:282] 0 containers: []
	W1216 04:38:18.974011  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:18.974016  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:18.974076  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:18.999416  481598 cri.go:89] found id: ""
	I1216 04:38:18.999430  481598 logs.go:282] 0 containers: []
	W1216 04:38:18.999437  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:18.999442  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:18.999499  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:19.036420  481598 cri.go:89] found id: ""
	I1216 04:38:19.036433  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.036440  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:19.036444  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:19.036500  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:19.063584  481598 cri.go:89] found id: ""
	I1216 04:38:19.063600  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.063617  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:19.063623  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:19.063694  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:19.099252  481598 cri.go:89] found id: ""
	I1216 04:38:19.099275  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.099283  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:19.099289  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:19.099363  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:19.126285  481598 cri.go:89] found id: ""
	I1216 04:38:19.126307  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.126315  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:19.126320  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:19.126387  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:19.151707  481598 cri.go:89] found id: ""
	I1216 04:38:19.151722  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.151738  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:19.151746  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:19.151757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:19.216698  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:19.216723  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:19.231764  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:19.231783  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:19.299324  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:19.291049   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.291658   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.293310   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.293836   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.295297   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:19.299334  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:19.299344  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:19.368556  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:19.368580  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:21.906105  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:21.916147  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:21.916206  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:21.941307  481598 cri.go:89] found id: ""
	I1216 04:38:21.941321  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.941328  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:21.941333  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:21.941399  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:21.966745  481598 cri.go:89] found id: ""
	I1216 04:38:21.966760  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.966767  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:21.966772  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:21.966831  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:21.996091  481598 cri.go:89] found id: ""
	I1216 04:38:21.996106  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.996113  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:21.996117  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:21.996176  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:22.022731  481598 cri.go:89] found id: ""
	I1216 04:38:22.022746  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.022753  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:22.022758  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:22.022820  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:22.055034  481598 cri.go:89] found id: ""
	I1216 04:38:22.055048  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.055067  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:22.055072  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:22.055136  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:22.106853  481598 cri.go:89] found id: ""
	I1216 04:38:22.106868  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.106875  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:22.106880  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:22.106949  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:22.143371  481598 cri.go:89] found id: ""
	I1216 04:38:22.143385  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.143392  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:22.143399  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:22.143410  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:22.209056  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:22.200890   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.201492   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203157   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203493   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.204997   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:22.209083  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:22.209096  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:22.276728  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:22.276748  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:22.308467  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:22.308483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:22.373121  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:22.373141  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:24.888068  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:24.898375  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:24.898438  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:24.922926  481598 cri.go:89] found id: ""
	I1216 04:38:24.922940  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.922953  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:24.922958  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:24.923018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:24.948274  481598 cri.go:89] found id: ""
	I1216 04:38:24.948288  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.948296  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:24.948300  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:24.948366  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:24.973866  481598 cri.go:89] found id: ""
	I1216 04:38:24.973880  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.973888  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:24.973893  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:24.973950  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:24.999743  481598 cri.go:89] found id: ""
	I1216 04:38:24.999757  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.999764  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:24.999769  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:24.999827  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:25.030266  481598 cri.go:89] found id: ""
	I1216 04:38:25.030280  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.030298  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:25.030303  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:25.030363  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:25.055976  481598 cri.go:89] found id: ""
	I1216 04:38:25.055991  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.056008  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:25.056014  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:25.056070  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:25.096522  481598 cri.go:89] found id: ""
	I1216 04:38:25.096537  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.096553  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:25.096568  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:25.096580  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:25.171632  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:25.162141   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.162937   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.164740   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.165464   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.166973   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:25.162141   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.162937   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.164740   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.165464   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.166973   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:25.171649  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:25.171661  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:25.239309  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:25.239330  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:25.268791  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:25.268807  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:25.345864  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:25.345887  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:27.863617  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:27.874797  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:27.874872  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:27.904044  481598 cri.go:89] found id: ""
	I1216 04:38:27.904057  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.904064  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:27.904070  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:27.904135  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:27.930157  481598 cri.go:89] found id: ""
	I1216 04:38:27.930172  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.930179  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:27.930184  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:27.930248  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:27.960176  481598 cri.go:89] found id: ""
	I1216 04:38:27.960203  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.960211  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:27.960216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:27.960287  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:27.986202  481598 cri.go:89] found id: ""
	I1216 04:38:27.986215  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.986222  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:27.986227  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:27.986284  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:28.017804  481598 cri.go:89] found id: ""
	I1216 04:38:28.017818  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.017825  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:28.017830  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:28.017899  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:28.048381  481598 cri.go:89] found id: ""
	I1216 04:38:28.048397  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.048404  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:28.048410  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:28.048469  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:28.089010  481598 cri.go:89] found id: ""
	I1216 04:38:28.089024  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.089032  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:28.089040  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:28.089051  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:28.107163  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:28.107185  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:28.185125  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:28.176718   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.177346   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179024   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179600   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.181158   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:28.176718   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.177346   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179024   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179600   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.181158   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:28.185136  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:28.185146  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:28.253973  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:28.253993  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:28.284589  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:28.284611  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
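On every failed probe the loop also enumerates the expected control-plane containers: each `cri.go` filter such as `{State:all Name:kube-apiserver Namespaces:[]}` becomes a `crictl ps` call, and an empty ID list (`found id: ""`) produces the `No container was found matching` warning. A sketch of the same enumeration over the seven component names seen above, run node-side:

    # List CRI containers per control-plane component, mirroring the log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        if [ -z "$ids" ]; then
            echo "No container was found matching \"$name\""
        else
            echo "$name: $ids"
        fi
    done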
	I1216 04:38:30.850377  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:30.860658  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:30.860717  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:30.885504  481598 cri.go:89] found id: ""
	I1216 04:38:30.885519  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.885526  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:30.885531  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:30.885592  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:30.910273  481598 cri.go:89] found id: ""
	I1216 04:38:30.910287  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.910294  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:30.910299  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:30.910360  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:30.935120  481598 cri.go:89] found id: ""
	I1216 04:38:30.935134  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.935140  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:30.935145  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:30.935200  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:30.960866  481598 cri.go:89] found id: ""
	I1216 04:38:30.960879  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.960886  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:30.960891  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:30.960947  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:30.986279  481598 cri.go:89] found id: ""
	I1216 04:38:30.986294  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.986302  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:30.986306  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:30.986367  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:31.014463  481598 cri.go:89] found id: ""
	I1216 04:38:31.014486  481598 logs.go:282] 0 containers: []
	W1216 04:38:31.014493  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:31.014499  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:31.014561  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:31.041177  481598 cri.go:89] found id: ""
	I1216 04:38:31.041198  481598 logs.go:282] 0 containers: []
	W1216 04:38:31.041205  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:31.041213  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:31.041248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:31.083930  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:31.083946  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:31.155612  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:31.155632  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:31.171599  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:31.171616  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:31.238570  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:31.230375   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.231355   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.232487   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.233079   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.234687   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:31.230375   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.231355   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.232487   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.233079   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.234687   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:31.238580  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:31.238590  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:33.806752  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:33.816682  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:33.816748  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:33.841422  481598 cri.go:89] found id: ""
	I1216 04:38:33.841437  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.841444  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:33.841449  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:33.841508  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:33.866870  481598 cri.go:89] found id: ""
	I1216 04:38:33.866884  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.866891  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:33.866896  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:33.866954  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:33.892338  481598 cri.go:89] found id: ""
	I1216 04:38:33.892352  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.892360  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:33.892365  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:33.892428  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:33.920004  481598 cri.go:89] found id: ""
	I1216 04:38:33.920018  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.920025  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:33.920030  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:33.920088  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:33.950159  481598 cri.go:89] found id: ""
	I1216 04:38:33.950173  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.950180  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:33.950185  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:33.950244  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:33.976065  481598 cri.go:89] found id: ""
	I1216 04:38:33.976079  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.976086  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:33.976092  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:33.976172  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:34.001694  481598 cri.go:89] found id: ""
	I1216 04:38:34.001710  481598 logs.go:282] 0 containers: []
	W1216 04:38:34.001721  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:34.001729  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:34.001741  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:34.041633  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:34.041651  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:34.108611  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:34.108630  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:34.125509  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:34.125525  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:34.196710  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:34.188193   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.189247   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191038   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191344   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.192807   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:34.188193   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.189247   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191038   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191344   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.192807   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:34.196735  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:34.196746  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
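The recurring `describe nodes` failure is a direct consequence of the dead API server: the node-local kubectl reads `/var/lib/minikube/kubeconfig`, whose server address is `https://localhost:8441`, and every discovery request (`/api?timeout=32s`) is refused because nothing listens on that port, hence the repeated `memcache.go` errors. A hedged sketch that pre-checks the port before issuing the same command from the log; `/dev/tcp` is a bash-only construct, assumed available on the node:

    # Probe the apiserver port first, then run the describe-nodes call from the log.
    if ! timeout 2 bash -c '</dev/tcp/localhost/8441'; then
        echo "nothing listening on localhost:8441; kubectl will see 'connection refused'"
    fi
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig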
	I1216 04:38:36.764814  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:36.774892  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:36.774950  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:36.800624  481598 cri.go:89] found id: ""
	I1216 04:38:36.800640  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.800647  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:36.800652  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:36.800715  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:36.826259  481598 cri.go:89] found id: ""
	I1216 04:38:36.826274  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.826281  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:36.826286  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:36.826343  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:36.852246  481598 cri.go:89] found id: ""
	I1216 04:38:36.852269  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.852277  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:36.852282  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:36.852351  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:36.877659  481598 cri.go:89] found id: ""
	I1216 04:38:36.877680  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.877688  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:36.877693  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:36.877752  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:36.903365  481598 cri.go:89] found id: ""
	I1216 04:38:36.903379  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.903385  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:36.903390  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:36.903446  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:36.928313  481598 cri.go:89] found id: ""
	I1216 04:38:36.928328  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.928335  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:36.928341  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:36.928399  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:36.953145  481598 cri.go:89] found id: ""
	I1216 04:38:36.953158  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.953165  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:36.953172  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:36.953182  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:37.018934  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:37.018956  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:37.036483  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:37.036500  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:37.114492  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:37.106457   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.106872   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108430   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108750   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.110247   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:37.106457   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.106872   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108430   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108750   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.110247   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:37.114503  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:37.114514  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:37.191646  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:37.191667  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:39.722033  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:39.731793  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:39.731852  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:39.756808  481598 cri.go:89] found id: ""
	I1216 04:38:39.756822  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.756829  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:39.756834  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:39.756891  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:39.782419  481598 cri.go:89] found id: ""
	I1216 04:38:39.782440  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.782448  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:39.782453  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:39.782510  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:39.807545  481598 cri.go:89] found id: ""
	I1216 04:38:39.807559  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.807576  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:39.807581  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:39.807639  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:39.836801  481598 cri.go:89] found id: ""
	I1216 04:38:39.836816  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.836832  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:39.836844  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:39.836914  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:39.861851  481598 cri.go:89] found id: ""
	I1216 04:38:39.861865  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.861872  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:39.861877  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:39.861935  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:39.891116  481598 cri.go:89] found id: ""
	I1216 04:38:39.891130  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.891137  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:39.891144  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:39.891200  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:39.917011  481598 cri.go:89] found id: ""
	I1216 04:38:39.917026  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.917032  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:39.917040  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:39.917050  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:39.983103  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:39.983124  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:39.997812  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:39.997829  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:40.072880  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:40.062419   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.063322   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066458   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066896   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.068451   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:40.062419   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.063322   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066458   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066896   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.068451   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:40.072890  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:40.072902  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:40.155262  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:40.155284  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
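The gather step itself is four fixed commands: kubelet and CRI-O logs from journald, kernel messages from `dmesg` filtered to warn-and-above with color disabled, and container status from `crictl` with a `docker` fallback. Bundled into one sketch, with the flags copied verbatim from the log above:

    # One pass of the log-gathering step, writing each source to a file.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u crio    -n 400 > crio.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a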
	I1216 04:38:42.686177  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:42.696709  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:42.696766  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:42.723669  481598 cri.go:89] found id: ""
	I1216 04:38:42.723684  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.723691  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:42.723697  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:42.723762  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:42.751573  481598 cri.go:89] found id: ""
	I1216 04:38:42.751587  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.751594  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:42.751599  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:42.751660  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:42.777155  481598 cri.go:89] found id: ""
	I1216 04:38:42.777170  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.777177  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:42.777182  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:42.777253  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:42.802762  481598 cri.go:89] found id: ""
	I1216 04:38:42.802776  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.802783  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:42.802788  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:42.802847  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:42.828278  481598 cri.go:89] found id: ""
	I1216 04:38:42.828291  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.828299  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:42.828303  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:42.828361  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:42.854186  481598 cri.go:89] found id: ""
	I1216 04:38:42.854211  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.854219  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:42.854224  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:42.854281  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:42.879809  481598 cri.go:89] found id: ""
	I1216 04:38:42.879822  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.879831  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:42.879839  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:42.879851  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:42.945305  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:42.935474   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.936560   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.937507   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939318   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939948   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:42.935474   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.936560   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.937507   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939318   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939948   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:42.945315  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:42.945326  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:43.019176  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:43.019199  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:43.048232  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:43.048248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:43.128355  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:43.128376  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
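Every `ssh_runner.go:195] Run:` line is a command executed on the node over SSH, so any single step can be replayed from the host when debugging a run like this one. A sketch using `minikube ssh`; `<profile>` is a placeholder for the profile under test, which this excerpt does not name:

    # Replay one node-side command from the host; <profile> is hypothetical.
    minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver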
	I1216 04:38:45.644135  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:45.654691  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:45.654750  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:45.682129  481598 cri.go:89] found id: ""
	I1216 04:38:45.682143  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.682151  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:45.682156  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:45.682216  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:45.706956  481598 cri.go:89] found id: ""
	I1216 04:38:45.706970  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.706977  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:45.706981  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:45.707040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:45.732479  481598 cri.go:89] found id: ""
	I1216 04:38:45.732493  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.732500  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:45.732505  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:45.732563  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:45.757526  481598 cri.go:89] found id: ""
	I1216 04:38:45.757540  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.757547  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:45.757553  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:45.757610  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:45.787392  481598 cri.go:89] found id: ""
	I1216 04:38:45.787407  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.787414  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:45.787419  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:45.787481  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:45.817452  481598 cri.go:89] found id: ""
	I1216 04:38:45.817477  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.817484  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:45.817490  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:45.817549  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:45.843705  481598 cri.go:89] found id: ""
	I1216 04:38:45.843732  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.843744  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:45.843752  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:45.843762  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:45.909394  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:45.909415  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:45.924650  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:45.924667  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:45.985242  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:45.976918   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.977461   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.978500   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980038   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980480   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:45.976918   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.977461   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.978500   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980038   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980480   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:45.985251  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:45.985262  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:46.060306  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:46.060333  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:48.603994  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:48.614177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:48.614238  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:48.639598  481598 cri.go:89] found id: ""
	I1216 04:38:48.639612  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.639620  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:48.639625  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:48.639685  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:48.668445  481598 cri.go:89] found id: ""
	I1216 04:38:48.668458  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.668465  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:48.668470  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:48.668525  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:48.698321  481598 cri.go:89] found id: ""
	I1216 04:38:48.698336  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.698343  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:48.698348  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:48.698410  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:48.724272  481598 cri.go:89] found id: ""
	I1216 04:38:48.724286  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.724293  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:48.724298  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:48.724367  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:48.748881  481598 cri.go:89] found id: ""
	I1216 04:38:48.748895  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.748902  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:48.748907  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:48.748965  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:48.773436  481598 cri.go:89] found id: ""
	I1216 04:38:48.773450  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.773456  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:48.773462  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:48.773518  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:48.798866  481598 cri.go:89] found id: ""
	I1216 04:38:48.798880  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.798887  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:48.798894  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:48.798904  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:48.830890  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:48.830906  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:48.897158  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:48.897179  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:48.912309  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:48.912326  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:48.979282  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:48.970954   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.971966   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.972659   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974127   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974422   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:48.970954   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.971966   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.972659   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974127   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974422   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:48.979293  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:48.979304  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:51.548916  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:51.559621  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:51.559691  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:51.585186  481598 cri.go:89] found id: ""
	I1216 04:38:51.585201  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.585208  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:51.585214  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:51.585281  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:51.610438  481598 cri.go:89] found id: ""
	I1216 04:38:51.610454  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.610462  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:51.610466  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:51.610523  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:51.640579  481598 cri.go:89] found id: ""
	I1216 04:38:51.640594  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.640601  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:51.640607  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:51.640665  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:51.667755  481598 cri.go:89] found id: ""
	I1216 04:38:51.667770  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.667778  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:51.667783  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:51.667840  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:51.697652  481598 cri.go:89] found id: ""
	I1216 04:38:51.697666  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.697673  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:51.697678  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:51.697738  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:51.723173  481598 cri.go:89] found id: ""
	I1216 04:38:51.723188  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.723195  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:51.723200  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:51.723266  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:51.748836  481598 cri.go:89] found id: ""
	I1216 04:38:51.748851  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.748858  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:51.748865  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:51.748876  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:51.790045  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:51.790061  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:51.857688  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:51.857707  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:51.872771  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:51.872788  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:51.934401  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:51.926211   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.926966   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.928583   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.929148   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.930598   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:51.926211   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.926966   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.928583   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.929148   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.930598   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:51.934410  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:51.934420  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:54.502288  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:54.513093  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:54.513158  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:54.540102  481598 cri.go:89] found id: ""
	I1216 04:38:54.540116  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.540124  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:54.540129  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:54.540187  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:54.565581  481598 cri.go:89] found id: ""
	I1216 04:38:54.565597  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.565605  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:54.565609  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:54.565673  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:54.594141  481598 cri.go:89] found id: ""
	I1216 04:38:54.594155  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.594163  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:54.594167  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:54.594229  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:54.620437  481598 cri.go:89] found id: ""
	I1216 04:38:54.620451  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.620459  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:54.620464  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:54.620521  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:54.651777  481598 cri.go:89] found id: ""
	I1216 04:38:54.651792  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.651800  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:54.651805  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:54.651862  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:54.677522  481598 cri.go:89] found id: ""
	I1216 04:38:54.677536  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.677544  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:54.677549  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:54.677608  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:54.702758  481598 cri.go:89] found id: ""
	I1216 04:38:54.702774  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.702782  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:54.702789  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:54.702800  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:54.731468  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:54.731485  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:54.801713  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:54.801732  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:54.816784  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:54.816800  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:54.890418  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:54.882935   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.883563   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.884583   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.885055   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.886522   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:54.882935   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.883563   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.884583   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.885055   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.886522   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:54.890428  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:54.890439  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:57.462843  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:57.473005  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:57.473096  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:57.498656  481598 cri.go:89] found id: ""
	I1216 04:38:57.498670  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.498676  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:57.498682  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:57.498740  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:57.524589  481598 cri.go:89] found id: ""
	I1216 04:38:57.524604  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.524611  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:57.524616  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:57.524683  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:57.549819  481598 cri.go:89] found id: ""
	I1216 04:38:57.549833  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.549844  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:57.549849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:57.549906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:57.580220  481598 cri.go:89] found id: ""
	I1216 04:38:57.580234  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.580241  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:57.580246  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:57.580303  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:57.605587  481598 cri.go:89] found id: ""
	I1216 04:38:57.605600  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.605607  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:57.605612  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:57.605668  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:57.630691  481598 cri.go:89] found id: ""
	I1216 04:38:57.630706  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.630721  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:57.630726  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:57.630784  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:57.655557  481598 cri.go:89] found id: ""
	I1216 04:38:57.655571  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.655579  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:57.655588  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:57.655598  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:57.686872  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:57.686888  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:57.752402  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:57.752422  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:57.767423  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:57.767439  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:57.831611  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:57.823549   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.824364   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826016   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826308   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.827809   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:57.823549   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.824364   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826016   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826308   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.827809   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:57.831621  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:57.831631  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:00.403298  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:00.416780  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:00.416848  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:00.445602  481598 cri.go:89] found id: ""
	I1216 04:39:00.445618  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.445626  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:00.445632  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:00.445698  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:00.480454  481598 cri.go:89] found id: ""
	I1216 04:39:00.480470  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.480478  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:00.480483  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:00.480548  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:00.509654  481598 cri.go:89] found id: ""
	I1216 04:39:00.509669  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.509677  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:00.509682  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:00.509746  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:00.539666  481598 cri.go:89] found id: ""
	I1216 04:39:00.539681  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.539688  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:00.539694  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:00.539755  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:00.567301  481598 cri.go:89] found id: ""
	I1216 04:39:00.567316  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.567323  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:00.567328  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:00.567388  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:00.593431  481598 cri.go:89] found id: ""
	I1216 04:39:00.593446  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.593453  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:00.593458  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:00.593526  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:00.618713  481598 cri.go:89] found id: ""
	I1216 04:39:00.618728  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.618736  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:00.618743  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:00.618754  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:00.687858  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:00.678533   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.679159   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.681526   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.682358   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.683768   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:00.678533   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.679159   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.681526   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.682358   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.683768   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:00.687869  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:00.687880  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:00.757046  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:00.757071  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:00.784949  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:00.784966  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:00.850312  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:00.850331  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:03.365582  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:03.376104  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:03.376164  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:03.402517  481598 cri.go:89] found id: ""
	I1216 04:39:03.402532  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.402539  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:03.402544  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:03.402605  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:03.428281  481598 cri.go:89] found id: ""
	I1216 04:39:03.428295  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.428302  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:03.428308  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:03.428365  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:03.456250  481598 cri.go:89] found id: ""
	I1216 04:39:03.456267  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.456274  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:03.456280  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:03.456353  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:03.482051  481598 cri.go:89] found id: ""
	I1216 04:39:03.482064  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.482071  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:03.482077  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:03.482137  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:03.511578  481598 cri.go:89] found id: ""
	I1216 04:39:03.511594  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.511601  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:03.511606  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:03.511664  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:03.540839  481598 cri.go:89] found id: ""
	I1216 04:39:03.540853  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.540860  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:03.540866  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:03.540921  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:03.567087  481598 cri.go:89] found id: ""
	I1216 04:39:03.567103  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.567111  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:03.567119  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:03.567131  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:03.633316  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:03.633338  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:03.648697  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:03.648714  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:03.714118  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:03.704846   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.705914   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.707686   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.708281   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.710068   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:03.704846   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.705914   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.707686   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.708281   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.710068   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:03.714128  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:03.714140  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:03.784197  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:03.784219  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:06.317384  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:06.328685  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:06.328743  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:06.355872  481598 cri.go:89] found id: ""
	I1216 04:39:06.355887  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.355893  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:06.355907  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:06.355964  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:06.386605  481598 cri.go:89] found id: ""
	I1216 04:39:06.386619  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.386626  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:06.386631  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:06.386696  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:06.412102  481598 cri.go:89] found id: ""
	I1216 04:39:06.412117  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.412132  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:06.412137  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:06.412209  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:06.437654  481598 cri.go:89] found id: ""
	I1216 04:39:06.437669  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.437676  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:06.437681  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:06.437752  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:06.466130  481598 cri.go:89] found id: ""
	I1216 04:39:06.466145  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.466151  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:06.466156  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:06.466219  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:06.491149  481598 cri.go:89] found id: ""
	I1216 04:39:06.491163  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.491170  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:06.491176  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:06.491236  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:06.517113  481598 cri.go:89] found id: ""
	I1216 04:39:06.517127  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.517134  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:06.517141  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:06.517165  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:06.532219  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:06.532236  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:06.610459  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:06.601795   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603005   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603804   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605522   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605849   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:06.601795   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603005   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603804   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605522   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605849   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:06.610469  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:06.610480  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:06.678489  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:06.678509  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:06.713694  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:06.713710  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:09.281978  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:09.291972  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:09.292040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:09.318986  481598 cri.go:89] found id: ""
	I1216 04:39:09.319002  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.319009  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:09.319014  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:09.319080  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:09.355810  481598 cri.go:89] found id: ""
	I1216 04:39:09.355823  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.355848  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:09.355853  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:09.355917  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:09.386910  481598 cri.go:89] found id: ""
	I1216 04:39:09.386939  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.386946  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:09.386951  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:09.387019  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:09.415820  481598 cri.go:89] found id: ""
	I1216 04:39:09.415834  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.415841  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:09.415846  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:09.415902  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:09.441866  481598 cri.go:89] found id: ""
	I1216 04:39:09.441881  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.441888  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:09.441892  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:09.441956  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:09.467703  481598 cri.go:89] found id: ""
	I1216 04:39:09.467718  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.467724  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:09.467730  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:09.467790  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:09.494307  481598 cri.go:89] found id: ""
	I1216 04:39:09.494322  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.494329  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:09.494336  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:09.494346  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:09.521531  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:09.521549  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:09.587441  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:09.587464  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:09.602275  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:09.602291  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:09.664727  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:09.657029   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.657494   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659008   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659326   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.660782   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:09.657029   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.657494   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659008   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659326   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.660782   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:09.664737  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:09.664748  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:12.233947  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:12.245865  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:12.245923  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:12.270410  481598 cri.go:89] found id: ""
	I1216 04:39:12.270425  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.270431  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:12.270437  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:12.270513  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:12.295309  481598 cri.go:89] found id: ""
	I1216 04:39:12.295323  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.295330  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:12.295334  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:12.295391  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:12.326327  481598 cri.go:89] found id: ""
	I1216 04:39:12.326342  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.326349  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:12.326354  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:12.326415  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:12.358181  481598 cri.go:89] found id: ""
	I1216 04:39:12.358196  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.358203  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:12.358208  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:12.358309  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:12.390281  481598 cri.go:89] found id: ""
	I1216 04:39:12.390296  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.390303  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:12.390308  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:12.390365  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:12.419429  481598 cri.go:89] found id: ""
	I1216 04:39:12.419444  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.419451  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:12.419456  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:12.419512  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:12.445137  481598 cri.go:89] found id: ""
	I1216 04:39:12.445151  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.445159  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:12.445167  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:12.445177  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:12.510786  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:12.510805  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:12.525785  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:12.525801  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:12.590602  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:12.581842   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.582992   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.584571   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.585097   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.586642   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:12.581842   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.582992   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.584571   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.585097   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.586642   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:12.590616  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:12.590627  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:12.664304  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:12.664331  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
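
The cycle above is minikube's control-plane probe: it pgreps for a kube-apiserver process, asks crictl for each expected container by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, gathers kubelet/dmesg/CRI-O logs before retrying. A minimal Go sketch of the crictl listing step follows; the helper name findContainer and the standalone main are illustrative, not minikube's actual code, and it assumes crictl is on PATH and sudo is password-less.

// Hypothetical sketch of the "sudo crictl ps -a --quiet --name=<name>" calls
// in the log above. Names and structure are illustrative, not minikube's API.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainer returns the container IDs crictl reports for a name filter,
// one ID per output line, exactly as the --quiet flag prints them.
func findContainer(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := findContainer(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// Matches the `No container was found matching "<name>"` warnings above.
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%q: %v\n", name, ids)
		}
	}
}
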
	I1216 04:39:15.192618  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:15.202786  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:15.202855  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:15.227787  481598 cri.go:89] found id: ""
	I1216 04:39:15.227801  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.227808  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:15.227813  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:15.227875  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:15.254490  481598 cri.go:89] found id: ""
	I1216 04:39:15.254505  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.254512  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:15.254517  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:15.254578  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:15.280037  481598 cri.go:89] found id: ""
	I1216 04:39:15.280052  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.280060  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:15.280064  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:15.280124  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:15.306278  481598 cri.go:89] found id: ""
	I1216 04:39:15.306295  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.306303  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:15.306308  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:15.306368  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:15.338132  481598 cri.go:89] found id: ""
	I1216 04:39:15.338146  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.338152  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:15.338157  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:15.338215  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:15.365557  481598 cri.go:89] found id: ""
	I1216 04:39:15.365571  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.365578  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:15.365583  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:15.365640  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:15.394440  481598 cri.go:89] found id: ""
	I1216 04:39:15.394454  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.394461  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:15.394469  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:15.394478  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:15.460219  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:15.460240  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:15.475344  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:15.475362  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:15.543524  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:15.535805   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.536549   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538069   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538584   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.539605   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:15.535805   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.536549   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538069   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538584   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.539605   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:15.543542  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:15.543552  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:15.611736  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:15.611757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
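
Every describe-nodes attempt in these cycles fails the same way: kubectl cannot reach https://localhost:8441, meaning nothing is listening on the apiserver port at all. A hedged sketch of probing that symptom directly; the port and the connection-refused outcome are taken from the log, while the program itself is hypothetical.

// Hypothetical probe for the symptom in the stderr blocks: the apiserver
// endpoint on localhost:8441 refusing TCP connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Corresponds to "dial tcp [::1]:8441: connect: connection refused" above.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
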
	I1216 04:39:18.147208  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:18.157570  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:18.157629  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:18.182325  481598 cri.go:89] found id: ""
	I1216 04:39:18.182339  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.182346  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:18.182351  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:18.182409  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:18.211344  481598 cri.go:89] found id: ""
	I1216 04:39:18.211358  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.211365  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:18.211370  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:18.211430  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:18.236501  481598 cri.go:89] found id: ""
	I1216 04:39:18.236518  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.236525  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:18.236533  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:18.236600  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:18.261000  481598 cri.go:89] found id: ""
	I1216 04:39:18.261013  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.261020  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:18.261025  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:18.261112  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:18.286887  481598 cri.go:89] found id: ""
	I1216 04:39:18.286901  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.286908  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:18.286913  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:18.286970  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:18.311492  481598 cri.go:89] found id: ""
	I1216 04:39:18.311506  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.311514  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:18.311519  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:18.311577  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:18.350624  481598 cri.go:89] found id: ""
	I1216 04:39:18.350638  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.350645  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:18.350652  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:18.350663  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:18.424437  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:18.424461  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:18.439409  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:18.439425  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:18.503408  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:18.495202   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.495809   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.497576   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.498108   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.499707   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:18.495202   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.495809   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.497576   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.498108   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.499707   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:18.503426  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:18.503439  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:18.572236  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:18.572256  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:21.099923  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:21.109895  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:21.109959  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:21.135095  481598 cri.go:89] found id: ""
	I1216 04:39:21.135110  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.135117  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:21.135122  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:21.135188  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:21.159978  481598 cri.go:89] found id: ""
	I1216 04:39:21.159991  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.159998  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:21.160002  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:21.160060  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:21.184861  481598 cri.go:89] found id: ""
	I1216 04:39:21.184875  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.184882  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:21.184887  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:21.184943  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:21.215362  481598 cri.go:89] found id: ""
	I1216 04:39:21.215376  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.215383  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:21.215388  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:21.215451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:21.241352  481598 cri.go:89] found id: ""
	I1216 04:39:21.241366  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.241373  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:21.241378  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:21.241435  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:21.270124  481598 cri.go:89] found id: ""
	I1216 04:39:21.270139  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.270146  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:21.270151  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:21.270210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:21.294836  481598 cri.go:89] found id: ""
	I1216 04:39:21.294850  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.294857  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:21.294865  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:21.294876  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:21.340249  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:21.340265  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:21.415950  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:21.415975  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:21.431603  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:21.431619  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:21.496240  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:21.487807   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.488585   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490277   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490834   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.492427   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:21.487807   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.488585   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490277   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490834   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.492427   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:21.496250  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:21.496260  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:24.064476  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:24.075218  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:24.075282  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:24.100790  481598 cri.go:89] found id: ""
	I1216 04:39:24.100804  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.100810  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:24.100815  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:24.100870  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:24.127285  481598 cri.go:89] found id: ""
	I1216 04:39:24.127301  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.127308  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:24.127312  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:24.127371  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:24.156427  481598 cri.go:89] found id: ""
	I1216 04:39:24.156440  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.156447  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:24.156452  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:24.156513  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:24.182130  481598 cri.go:89] found id: ""
	I1216 04:39:24.182146  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.182154  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:24.182159  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:24.182216  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:24.207363  481598 cri.go:89] found id: ""
	I1216 04:39:24.207378  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.207385  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:24.207390  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:24.207451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:24.235986  481598 cri.go:89] found id: ""
	I1216 04:39:24.236001  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.236017  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:24.236022  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:24.236077  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:24.260561  481598 cri.go:89] found id: ""
	I1216 04:39:24.260582  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.260589  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:24.260597  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:24.260608  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:24.328717  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:24.328738  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:24.362340  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:24.362357  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:24.435463  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:24.435483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:24.452196  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:24.452212  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:24.517484  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:24.509289   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.509913   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511537   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511992   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.513587   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:24.509289   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.509913   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511537   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511992   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.513587   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
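
The timestamps (04:39:12, :15, :18, :21, :24, ...) show a roughly three-second retry cadence. Below is a sketch of such a poll-until-deadline loop, under the assumption that the pgrep pattern from the log is the readiness check; waitForAPIServer and the two-minute timeout are invented for illustration.

// Hypothetical poll loop with the ~3s cadence visible in the log timestamps:
// retry a readiness check until a deadline, then give up.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning runs the same process check each cycle above starts with.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		time.Sleep(3 * time.Second) // matches the spacing of the retries in the log
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
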
	I1216 04:39:27.018375  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:27.028921  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:27.028982  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:27.058968  481598 cri.go:89] found id: ""
	I1216 04:39:27.058984  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.058991  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:27.058996  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:27.059058  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:27.086788  481598 cri.go:89] found id: ""
	I1216 04:39:27.086802  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.086808  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:27.086815  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:27.086872  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:27.111593  481598 cri.go:89] found id: ""
	I1216 04:39:27.111607  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.111629  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:27.111635  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:27.111700  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:27.135786  481598 cri.go:89] found id: ""
	I1216 04:39:27.135800  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.135816  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:27.135822  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:27.135881  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:27.175564  481598 cri.go:89] found id: ""
	I1216 04:39:27.175577  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.175593  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:27.175598  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:27.175670  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:27.201020  481598 cri.go:89] found id: ""
	I1216 04:39:27.201034  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.201041  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:27.201048  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:27.201123  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:27.226608  481598 cri.go:89] found id: ""
	I1216 04:39:27.226622  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.226629  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:27.226637  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:27.226648  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:27.292121  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:27.292140  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:27.307824  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:27.307840  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:27.382707  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:27.371394   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.372197   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374043   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374339   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.375852   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:27.371394   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.372197   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374043   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374339   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.375852   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:27.382717  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:27.382728  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:27.450745  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:27.450764  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:29.981824  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:29.991752  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:29.991812  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:30.027720  481598 cri.go:89] found id: ""
	I1216 04:39:30.027737  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.027744  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:30.027749  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:30.027824  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:30.064834  481598 cri.go:89] found id: ""
	I1216 04:39:30.064862  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.064869  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:30.064875  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:30.064942  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:30.092327  481598 cri.go:89] found id: ""
	I1216 04:39:30.092341  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.092349  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:30.092354  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:30.092415  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:30.119568  481598 cri.go:89] found id: ""
	I1216 04:39:30.119583  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.119590  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:30.119595  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:30.119654  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:30.145948  481598 cri.go:89] found id: ""
	I1216 04:39:30.145962  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.145970  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:30.145974  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:30.146037  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:30.174055  481598 cri.go:89] found id: ""
	I1216 04:39:30.174069  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.174077  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:30.174082  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:30.174148  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:30.200676  481598 cri.go:89] found id: ""
	I1216 04:39:30.200704  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.200711  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:30.200719  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:30.200729  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:30.273177  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:30.273199  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:30.307730  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:30.307749  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:30.380128  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:30.380149  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:30.398650  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:30.398668  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:30.464666  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:30.456212   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.456700   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458422   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458756   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.460283   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:30.456212   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.456700   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458422   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458756   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.460283   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:32.965244  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:32.975770  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:32.975829  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:33.008069  481598 cri.go:89] found id: ""
	I1216 04:39:33.008086  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.008094  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:33.008099  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:33.008180  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:33.035228  481598 cri.go:89] found id: ""
	I1216 04:39:33.035242  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.035249  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:33.035254  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:33.035319  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:33.062504  481598 cri.go:89] found id: ""
	I1216 04:39:33.062518  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.062525  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:33.062530  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:33.062588  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:33.088441  481598 cri.go:89] found id: ""
	I1216 04:39:33.088455  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.088462  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:33.088467  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:33.088529  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:33.119260  481598 cri.go:89] found id: ""
	I1216 04:39:33.119274  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.119281  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:33.119286  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:33.119346  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:33.150552  481598 cri.go:89] found id: ""
	I1216 04:39:33.150567  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.150575  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:33.150580  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:33.150644  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:33.180001  481598 cri.go:89] found id: ""
	I1216 04:39:33.180016  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.180023  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:33.180030  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:33.180040  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:33.248727  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:33.248752  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:33.277683  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:33.277700  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:33.350702  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:33.350721  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:33.369208  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:33.369248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:33.439765  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:33.431154   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.432026   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.433573   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.434049   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.435592   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:33.431154   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.432026   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.433573   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.434049   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.435592   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
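
Each cycle also gathers the last 400 journal lines for the kubelet and crio units ("journalctl -u <unit> -n 400"). A small sketch of that gathering step; unitLogs is a hypothetical helper, and it assumes journalctl and password-less sudo are available on the node.

// Hypothetical version of the log-gathering commands above: fetch the last
// N journal lines for a systemd unit.
package main

import (
	"fmt"
	"os/exec"
)

func unitLogs(unit string, lines int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "crio"} {
		logs, err := unitLogs(u, 400)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", u, err)
			continue
		}
		fmt.Printf("=== %s (%d bytes) ===\n", u, len(logs))
	}
}
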
	I1216 04:39:35.940031  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:35.950049  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:35.950107  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:35.975196  481598 cri.go:89] found id: ""
	I1216 04:39:35.975209  481598 logs.go:282] 0 containers: []
	W1216 04:39:35.975216  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:35.975221  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:35.975277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:36.001797  481598 cri.go:89] found id: ""
	I1216 04:39:36.001812  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.001820  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:36.001826  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:36.001890  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:36.036431  481598 cri.go:89] found id: ""
	I1216 04:39:36.036446  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.036454  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:36.036459  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:36.036525  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:36.063963  481598 cri.go:89] found id: ""
	I1216 04:39:36.063978  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.063985  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:36.063990  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:36.064048  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:36.090639  481598 cri.go:89] found id: ""
	I1216 04:39:36.090653  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.090660  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:36.090665  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:36.090724  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:36.116793  481598 cri.go:89] found id: ""
	I1216 04:39:36.116807  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.116816  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:36.116821  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:36.116880  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:36.141959  481598 cri.go:89] found id: ""
	I1216 04:39:36.141972  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.141979  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:36.141986  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:36.141996  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:36.208976  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:36.208996  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:36.239530  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:36.239546  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:36.305220  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:36.305245  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:36.322139  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:36.322169  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:36.399936  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:36.391294   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.391711   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.393476   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.394135   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.395762   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:36.391294   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.391711   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.393476   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.394135   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.395762   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
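
The "container status" command in these cycles is itself a fallback chain: use crictl if `which crictl` resolves, otherwise fall back to `docker ps -a`. The same idea sketched in Go, under the same assumptions as the earlier sketches; containerStatus is an invented name.

// Hypothetical rendering of the fallback in the "container status" lines:
// prefer crictl, fall back to docker when crictl is absent or errors.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	s, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(s)
}
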
	I1216 04:39:38.900194  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:38.910569  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:38.910632  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:38.936840  481598 cri.go:89] found id: ""
	I1216 04:39:38.936854  481598 logs.go:282] 0 containers: []
	W1216 04:39:38.936861  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:38.936867  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:38.936926  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:38.969994  481598 cri.go:89] found id: ""
	I1216 04:39:38.970008  481598 logs.go:282] 0 containers: []
	W1216 04:39:38.970016  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:38.970021  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:38.970092  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:39.000246  481598 cri.go:89] found id: ""
	I1216 04:39:39.000260  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.000267  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:39.000272  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:39.000328  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:39.028053  481598 cri.go:89] found id: ""
	I1216 04:39:39.028068  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.028075  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:39.028080  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:39.028139  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:39.053044  481598 cri.go:89] found id: ""
	I1216 04:39:39.053058  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.053100  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:39.053107  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:39.053165  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:39.078212  481598 cri.go:89] found id: ""
	I1216 04:39:39.078226  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.078234  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:39.078239  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:39.078296  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:39.103968  481598 cri.go:89] found id: ""
	I1216 04:39:39.103982  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.103994  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:39.104001  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:39.104011  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:39.171261  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:39.171283  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:39.203918  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:39.203937  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:39.269162  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:39.269183  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:39.283640  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:39.283658  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:39.357490  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:39.349083   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.349811   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351336   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351851   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.353466   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:39.349083   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.349811   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351336   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351851   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.353466   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:41.857783  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:41.868156  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:41.868218  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:41.896097  481598 cri.go:89] found id: ""
	I1216 04:39:41.896111  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.896118  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:41.896123  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:41.896183  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:41.923730  481598 cri.go:89] found id: ""
	I1216 04:39:41.923745  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.923752  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:41.923758  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:41.923814  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:41.948996  481598 cri.go:89] found id: ""
	I1216 04:39:41.949010  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.949017  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:41.949022  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:41.949098  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:41.973820  481598 cri.go:89] found id: ""
	I1216 04:39:41.973834  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.973841  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:41.973845  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:41.973901  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:41.999809  481598 cri.go:89] found id: ""
	I1216 04:39:41.999832  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.999839  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:41.999845  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:41.999910  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:42.032190  481598 cri.go:89] found id: ""
	I1216 04:39:42.032216  481598 logs.go:282] 0 containers: []
	W1216 04:39:42.032224  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:42.032229  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:42.032301  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:42.059655  481598 cri.go:89] found id: ""
	I1216 04:39:42.059679  481598 logs.go:282] 0 containers: []
	W1216 04:39:42.059687  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:42.059694  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:42.059705  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:42.127853  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:42.127875  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:42.146370  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:42.146393  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:42.223415  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:42.212968   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.213792   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.215670   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.216278   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.218024   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:42.212968   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.213792   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.215670   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.216278   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.218024   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:42.223444  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:42.223457  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:42.304338  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:42.304368  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:44.847911  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:44.858741  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:44.858820  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:44.884095  481598 cri.go:89] found id: ""
	I1216 04:39:44.884110  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.884118  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:44.884122  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:44.884181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:44.911877  481598 cri.go:89] found id: ""
	I1216 04:39:44.911891  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.911898  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:44.911902  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:44.911960  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:44.938117  481598 cri.go:89] found id: ""
	I1216 04:39:44.938132  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.938139  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:44.938144  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:44.938204  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:44.972779  481598 cri.go:89] found id: ""
	I1216 04:39:44.972793  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.972800  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:44.972805  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:44.972862  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:45.000033  481598 cri.go:89] found id: ""
	I1216 04:39:45.000047  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.000054  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:45.000060  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:45.000121  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:45.072214  481598 cri.go:89] found id: ""
	I1216 04:39:45.072234  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.072244  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:45.072250  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:45.072325  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:45.112612  481598 cri.go:89] found id: ""
	I1216 04:39:45.112632  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.112641  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:45.112653  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:45.112668  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:45.193381  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:45.193407  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:45.244205  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:45.244225  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:45.324983  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:45.325004  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:45.340857  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:45.340880  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:45.423270  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:45.414685   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.415307   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.416945   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.417545   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.419306   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:45.414685   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.415307   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.416945   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.417545   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.419306   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:47.923526  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:47.933779  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:47.933853  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:47.960777  481598 cri.go:89] found id: ""
	I1216 04:39:47.960793  481598 logs.go:282] 0 containers: []
	W1216 04:39:47.960800  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:47.960804  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:47.960863  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:47.990010  481598 cri.go:89] found id: ""
	I1216 04:39:47.990024  481598 logs.go:282] 0 containers: []
	W1216 04:39:47.990031  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:47.990036  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:47.990094  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:48.021881  481598 cri.go:89] found id: ""
	I1216 04:39:48.021897  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.021908  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:48.021914  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:48.021978  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:48.048841  481598 cri.go:89] found id: ""
	I1216 04:39:48.048860  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.048867  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:48.048872  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:48.048947  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:48.074988  481598 cri.go:89] found id: ""
	I1216 04:39:48.075002  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.075010  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:48.075015  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:48.075073  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:48.101288  481598 cri.go:89] found id: ""
	I1216 04:39:48.101303  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.101320  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:48.101325  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:48.101383  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:48.126469  481598 cri.go:89] found id: ""
	I1216 04:39:48.126483  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.126489  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:48.126497  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:48.126508  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:48.160206  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:48.160222  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:48.226864  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:48.226883  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:48.241861  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:48.241879  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:48.311183  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:48.302762   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.303348   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.304889   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.305401   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.306868   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:48.302762   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.303348   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.304889   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.305401   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.306868   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:48.311197  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:48.311208  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:50.890106  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:50.900561  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:50.900623  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:50.925477  481598 cri.go:89] found id: ""
	I1216 04:39:50.925491  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.925498  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:50.925503  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:50.925573  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:50.950590  481598 cri.go:89] found id: ""
	I1216 04:39:50.950604  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.950611  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:50.950615  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:50.950670  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:50.975563  481598 cri.go:89] found id: ""
	I1216 04:39:50.975577  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.975584  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:50.975588  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:50.975649  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:51.001446  481598 cri.go:89] found id: ""
	I1216 04:39:51.001460  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.001468  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:51.001473  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:51.001546  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:51.036808  481598 cri.go:89] found id: ""
	I1216 04:39:51.036822  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.036830  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:51.036834  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:51.036893  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:51.063122  481598 cri.go:89] found id: ""
	I1216 04:39:51.063136  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.063143  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:51.063148  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:51.063204  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:51.091909  481598 cri.go:89] found id: ""
	I1216 04:39:51.091924  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.091931  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:51.091938  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:51.091949  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:51.157330  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:51.157357  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:51.172521  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:51.172537  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:51.237104  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:51.228688   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.229354   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.230964   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.231596   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.233259   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:51.228688   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.229354   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.230964   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.231596   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.233259   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:51.237115  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:51.237126  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:51.310463  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:51.310484  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:53.856519  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:53.866849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:53.866907  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:53.892183  481598 cri.go:89] found id: ""
	I1216 04:39:53.892197  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.892204  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:53.892210  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:53.892269  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:53.917961  481598 cri.go:89] found id: ""
	I1216 04:39:53.917975  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.917983  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:53.917987  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:53.918046  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:53.943214  481598 cri.go:89] found id: ""
	I1216 04:39:53.943228  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.943235  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:53.943240  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:53.943298  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:53.968696  481598 cri.go:89] found id: ""
	I1216 04:39:53.968710  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.968717  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:53.968722  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:53.968778  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:53.993878  481598 cri.go:89] found id: ""
	I1216 04:39:53.993892  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.993900  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:53.993905  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:53.993961  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:54.021892  481598 cri.go:89] found id: ""
	I1216 04:39:54.021911  481598 logs.go:282] 0 containers: []
	W1216 04:39:54.021918  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:54.021924  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:54.021989  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:54.048339  481598 cri.go:89] found id: ""
	I1216 04:39:54.048353  481598 logs.go:282] 0 containers: []
	W1216 04:39:54.048360  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:54.048368  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:54.048379  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:54.115518  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:54.107249   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.107772   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109446   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109968   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.111592   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:54.107249   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.107772   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109446   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109968   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.111592   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:54.115529  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:54.115540  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:54.184110  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:54.184130  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:54.212611  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:54.212627  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:54.280294  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:54.280314  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:56.795621  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:56.805834  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:56.805904  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:56.831835  481598 cri.go:89] found id: ""
	I1216 04:39:56.831850  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.831857  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:56.831862  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:56.831920  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:56.857986  481598 cri.go:89] found id: ""
	I1216 04:39:56.858000  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.858007  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:56.858012  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:56.858086  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:56.884049  481598 cri.go:89] found id: ""
	I1216 04:39:56.884062  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.884069  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:56.884074  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:56.884129  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:56.909467  481598 cri.go:89] found id: ""
	I1216 04:39:56.909481  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.909488  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:56.909493  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:56.909553  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:56.935361  481598 cri.go:89] found id: ""
	I1216 04:39:56.935375  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.935382  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:56.935387  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:56.935444  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:56.963724  481598 cri.go:89] found id: ""
	I1216 04:39:56.963738  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.963745  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:56.963750  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:56.963807  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:56.988482  481598 cri.go:89] found id: ""
	I1216 04:39:56.988495  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.988502  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:56.988510  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:56.988520  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:57.057566  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:57.057587  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:57.073142  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:57.073160  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:57.138961  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:57.130646   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.131071   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.132726   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.133151   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.134926   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:57.130646   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.131071   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.132726   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.133151   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.134926   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:57.138972  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:57.138983  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:57.206475  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:57.206497  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:59.739022  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:59.749638  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:59.749700  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:59.776094  481598 cri.go:89] found id: ""
	I1216 04:39:59.776109  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.776115  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:59.776120  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:59.776180  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:59.802606  481598 cri.go:89] found id: ""
	I1216 04:39:59.802621  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.802628  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:59.802634  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:59.802697  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:59.829710  481598 cri.go:89] found id: ""
	I1216 04:39:59.829724  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.829731  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:59.829736  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:59.829808  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:59.859658  481598 cri.go:89] found id: ""
	I1216 04:39:59.859673  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.859680  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:59.859685  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:59.859742  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:59.884817  481598 cri.go:89] found id: ""
	I1216 04:39:59.884831  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.884838  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:59.884843  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:59.884906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:59.911195  481598 cri.go:89] found id: ""
	I1216 04:39:59.911210  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.911217  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:59.911223  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:59.911283  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:59.936870  481598 cri.go:89] found id: ""
	I1216 04:39:59.936885  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.936891  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:59.936899  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:59.936909  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:00.003032  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:00.003054  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:00.086753  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:00.086772  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:00.242338  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:00.228915   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.229766   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.232549   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.234186   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.236627   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:40:00.228915   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.229766   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.232549   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.234186   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.236627   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:40:00.242351  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:00.242395  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:00.380976  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:00.381000  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:40:02.964729  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:02.974990  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:40:02.975051  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:40:03.001443  481598 cri.go:89] found id: ""
	I1216 04:40:03.001458  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.001466  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:40:03.001471  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:40:03.001538  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:40:03.030227  481598 cri.go:89] found id: ""
	I1216 04:40:03.030241  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.030249  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:40:03.030254  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:40:03.030315  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:40:03.056406  481598 cri.go:89] found id: ""
	I1216 04:40:03.056421  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.056429  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:40:03.056439  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:40:03.056500  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:40:03.084430  481598 cri.go:89] found id: ""
	I1216 04:40:03.084452  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.084460  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:40:03.084465  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:40:03.084527  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:40:03.112058  481598 cri.go:89] found id: ""
	I1216 04:40:03.112072  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.112079  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:40:03.112084  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:40:03.112150  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:40:03.139147  481598 cri.go:89] found id: ""
	I1216 04:40:03.139161  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.139168  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:40:03.139173  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:40:03.139231  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:40:03.170943  481598 cri.go:89] found id: ""
	I1216 04:40:03.170958  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.170965  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:40:03.170973  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:40:03.170984  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:03.237388  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:03.237409  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:03.252191  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:03.252213  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:03.315123  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:03.306446   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.307653   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.308545   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.309495   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.310189   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:40:03.306446   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.307653   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.308545   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.309495   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.310189   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
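Every kubectl attempt above fails the same way: connection refused on localhost:8441, the apiserver port this profile is configured to use. Before reading further it can help to confirm directly that nothing ever came up on that port. A minimal sketch of such a check from inside the node, assuming the generic `minikube -p <profile> ssh` form (`<profile>` is a placeholder, not a name taken from this log):

    # sketch: verify nothing is listening on the apiserver port (8441 per the log above)
    minikube -p <profile> ssh -- "sudo ss -tlnp | grep 8441 || echo 'nothing listening on 8441'"

If nothing is listening, the connection-refused errors are a symptom rather than the cause: the kube-apiserver container was never created in the first place.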
	I1216 04:40:03.315132  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:03.315143  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:03.388848  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:03.388869  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
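Each polling cycle above runs the same sequence: pgrep for a running kube-apiserver, a crictl listing per control-plane component, then journalctl, dmesg, describe-nodes, CRI-O, and container-status collection. The empty `found id: ""` results mean CRI-O has no record of any control-plane container, not even an exited one. A condensed sketch of the same per-component check, using only the crictl invocation that appears in the log (`<profile>` again a placeholder):

    # sketch: repeat minikube's per-component container listing by hand
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name="$c"
    done

Empty output for every component corresponds to the `0 containers: []` lines above.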
	I1216 04:40:05.923315  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:05.934216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:40:05.934292  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:40:05.964778  481598 cri.go:89] found id: ""
	I1216 04:40:05.964791  481598 logs.go:282] 0 containers: []
	W1216 04:40:05.964798  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:40:05.964813  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:40:05.964895  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:40:05.991403  481598 cri.go:89] found id: ""
	I1216 04:40:05.991417  481598 logs.go:282] 0 containers: []
	W1216 04:40:05.991424  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:40:05.991429  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:40:05.991486  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:40:06.019838  481598 cri.go:89] found id: ""
	I1216 04:40:06.019853  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.019860  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:40:06.019865  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:40:06.019927  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:40:06.046554  481598 cri.go:89] found id: ""
	I1216 04:40:06.046569  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.046580  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:40:06.046585  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:40:06.046649  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:40:06.071958  481598 cri.go:89] found id: ""
	I1216 04:40:06.071973  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.071980  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:40:06.071985  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:40:06.072040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:40:06.099079  481598 cri.go:89] found id: ""
	I1216 04:40:06.099094  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.099101  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:40:06.099106  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:40:06.099170  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:40:06.126168  481598 cri.go:89] found id: ""
	I1216 04:40:06.126188  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.126195  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:40:06.126202  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:40:06.126213  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:06.192591  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:06.192611  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:06.207708  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:06.207729  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:06.274064  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:06.264712   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.265524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.267524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.268552   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.269537   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:40:06.264712   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.265524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.267524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.268552   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.269537   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:40:06.274074  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:06.274086  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:06.343044  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:06.343066  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:40:08.873218  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:08.883654  481598 kubeadm.go:602] duration metric: took 4m3.325303057s to restartPrimaryControlPlane
	W1216 04:40:08.883714  481598 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 04:40:08.883788  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 04:40:09.294329  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:40:09.307484  481598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:40:09.315713  481598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:40:09.315769  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:40:09.323612  481598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:40:09.323622  481598 kubeadm.go:158] found existing configuration files:
	
	I1216 04:40:09.323675  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:40:09.331783  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:40:09.331838  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:40:09.339284  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:40:09.346837  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:40:09.346891  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:40:09.354493  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:40:09.362269  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:40:09.362328  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:40:09.369970  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:40:09.378044  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:40:09.378103  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
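The cleanup loop above (04:40:09.323 through 04:40:09.378) greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and deletes any file that does not contain it; here every grep exits with status 2 because `kubeadm reset` already removed the files, so the rm calls are no-ops. A condensed sketch of that logic, with the endpoint taken verbatim from the log:

    # sketch of minikube's stale-kubeconfig check (grep failure => remove the file)
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done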
	I1216 04:40:09.385765  481598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:40:09.424060  481598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:40:09.424358  481598 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:40:09.495076  481598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:40:09.495141  481598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:40:09.495181  481598 kubeadm.go:319] OS: Linux
	I1216 04:40:09.495224  481598 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:40:09.495271  481598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:40:09.495318  481598 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:40:09.495365  481598 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:40:09.495412  481598 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:40:09.495459  481598 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:40:09.495502  481598 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:40:09.495550  481598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:40:09.495596  481598 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:40:09.563458  481598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:40:09.563582  481598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:40:09.563682  481598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:40:09.571744  481598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:40:09.577424  481598 out.go:252]   - Generating certificates and keys ...
	I1216 04:40:09.577526  481598 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:40:09.577597  481598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:40:09.577679  481598 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:40:09.577744  481598 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:40:09.577819  481598 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:40:09.577878  481598 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:40:09.577951  481598 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:40:09.578022  481598 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:40:09.578105  481598 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:40:09.578188  481598 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:40:09.578235  481598 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:40:09.578291  481598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:40:09.899760  481598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:40:10.102481  481598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:40:10.266020  481598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:40:10.669469  481598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:40:11.526452  481598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:40:11.527018  481598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:40:11.530635  481598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:40:11.533764  481598 out.go:252]   - Booting up control plane ...
	I1216 04:40:11.533860  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:40:11.533937  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:40:11.534462  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:40:11.549423  481598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:40:11.549689  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:40:11.557342  481598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:40:11.557601  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:40:11.557642  481598 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:40:11.689632  481598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:40:11.689752  481598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:44:11.687962  481598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001213504s
	I1216 04:44:11.687985  481598 kubeadm.go:319] 
	I1216 04:44:11.688045  481598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:44:11.688077  481598 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:44:11.688181  481598 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:44:11.688185  481598 kubeadm.go:319] 
	I1216 04:44:11.688293  481598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:44:11.688324  481598 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:44:11.688354  481598 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:44:11.688357  481598 kubeadm.go:319] 
	I1216 04:44:11.693131  481598 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:44:11.693558  481598 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:44:11.693669  481598 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:44:11.693904  481598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:44:11.693910  481598 kubeadm.go:319] 
	I1216 04:44:11.693977  481598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
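The init attempt dies in the `wait-control-plane` phase: kubeadm polls the kubelet's local health endpoint for four minutes and never gets a healthy response. kubeadm's own error text names the next diagnostic steps; combined with the endpoint it polls, a triage sequence inside the node looks like:

    # the two commands suggested by kubeadm's error text, plus the probe it polls
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    curl -sSL http://127.0.0.1:10248/healthz   # kubeadm waits up to 4m0s on this

`context deadline exceeded` on this attempt (and `connection refused` on the retry below) both mean the kubelet never started serving /healthz.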
	W1216 04:44:11.694089  481598 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001213504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
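Among the preflight warnings, the cgroups v1 deprecation is the one most plausibly related to the failure hint "required cgroups disabled": the host kernel (5.15.0-1084-aws) is running cgroups v1, which kubelet v1.35 treats as deprecated, and the warning names the escape hatch as the kubelet configuration option 'FailCgroupV1' set to 'false'. A hedged sketch only — the camelCase field spelling `failCgroupV1` and the standalone-file approach are assumptions, though the log confirms minikube already applies a strategic-merge patch to target "kubeletconfiguration":

    # hypothetical KubeletConfiguration fragment implementing the warning's advice
    printf '%s\n' \
      'apiVersion: kubelet.config.k8s.io/v1beta1' \
      'kind: KubeletConfiguration' \
      'failCgroupV1: false' > kubeletconfiguration-failcgroupv1.yaml

Whether this alone would let the kubelet come up healthy cannot be concluded from this log.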
	
	I1216 04:44:11.694190  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 04:44:12.104466  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:44:12.116829  481598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:44:12.116881  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:44:12.124364  481598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:44:12.124372  481598 kubeadm.go:158] found existing configuration files:
	
	I1216 04:44:12.124420  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:44:12.131751  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:44:12.131807  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:44:12.138938  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:44:12.146429  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:44:12.146482  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:44:12.153782  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:44:12.161218  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:44:12.161270  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:44:12.168781  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:44:12.176219  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:44:12.176271  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:44:12.183435  481598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:44:12.295783  481598 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:44:12.296200  481598 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:44:12.361811  481598 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:48:14.074988  481598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 04:48:14.075012  481598 kubeadm.go:319] 
	I1216 04:48:14.075081  481598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 04:48:14.079141  481598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:48:14.079195  481598 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:48:14.079284  481598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:48:14.079338  481598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:48:14.079372  481598 kubeadm.go:319] OS: Linux
	I1216 04:48:14.079416  481598 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:48:14.079463  481598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:48:14.079508  481598 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:48:14.079555  481598 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:48:14.079602  481598 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:48:14.079664  481598 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:48:14.079709  481598 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:48:14.079755  481598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:48:14.079801  481598 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:48:14.079872  481598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:48:14.079966  481598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:48:14.080055  481598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:48:14.080117  481598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:48:14.083166  481598 out.go:252]   - Generating certificates and keys ...
	I1216 04:48:14.083255  481598 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:48:14.083327  481598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:48:14.083402  481598 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:48:14.083461  481598 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:48:14.083529  481598 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:48:14.083582  481598 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:48:14.083644  481598 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:48:14.083704  481598 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:48:14.083778  481598 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:48:14.083849  481598 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:48:14.083886  481598 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:48:14.083941  481598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:48:14.083991  481598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:48:14.084046  481598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:48:14.084103  481598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:48:14.084165  481598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:48:14.084218  481598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:48:14.084301  481598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:48:14.084366  481598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:48:14.087214  481598 out.go:252]   - Booting up control plane ...
	I1216 04:48:14.087326  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:48:14.087404  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:48:14.087497  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:48:14.087610  481598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:48:14.087707  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:48:14.087811  481598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:48:14.087895  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:48:14.087932  481598 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:48:14.088082  481598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:48:14.088189  481598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:48:14.088268  481598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00077674s
	I1216 04:48:14.088271  481598 kubeadm.go:319] 
	I1216 04:48:14.088334  481598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:48:14.088366  481598 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:48:14.088482  481598 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:48:14.088486  481598 kubeadm.go:319] 
	I1216 04:48:14.088595  481598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:48:14.088637  481598 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:48:14.088668  481598 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:48:14.088677  481598 kubeadm.go:319] 
	I1216 04:48:14.088733  481598 kubeadm.go:403] duration metric: took 12m8.569239535s to StartCluster
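The second init attempt (04:44:12 through 04:48:14) fails identically, and at this point minikube stops retrying: 12m8s spent in StartCluster with no healthy kubelet. One loose end from the preflight output is the Service-kubelet warning; inside a minikube node the kubelet unit is normally driven by minikube itself, so the warning is expected there, but the fix kubeadm suggests for a manually managed host is exactly what it prints:

    # verbatim from the preflight warning above; relevant outside minikube's own lifecycle
    sudo systemctl enable kubelet.service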
	I1216 04:48:14.088763  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:48:14.088824  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:48:14.121113  481598 cri.go:89] found id: ""
	I1216 04:48:14.121140  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.121148  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:48:14.121153  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:48:14.121210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:48:14.150916  481598 cri.go:89] found id: ""
	I1216 04:48:14.150931  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.150938  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:48:14.150943  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:48:14.151005  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:48:14.177693  481598 cri.go:89] found id: ""
	I1216 04:48:14.177709  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.177716  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:48:14.177721  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:48:14.177782  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:48:14.202900  481598 cri.go:89] found id: ""
	I1216 04:48:14.202914  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.202921  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:48:14.202926  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:48:14.202983  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:48:14.229346  481598 cri.go:89] found id: ""
	I1216 04:48:14.229360  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.229367  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:48:14.229372  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:48:14.229433  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:48:14.255869  481598 cri.go:89] found id: ""
	I1216 04:48:14.255884  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.255891  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:48:14.255896  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:48:14.255953  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:48:14.282757  481598 cri.go:89] found id: ""
	I1216 04:48:14.282772  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.282779  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:48:14.282787  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:48:14.282797  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:48:14.349482  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:48:14.349503  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:48:14.364748  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:48:14.364765  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:48:14.440728  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:48:14.431516   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.432409   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434236   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434802   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.436554   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:48:14.431516   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.432409   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434236   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434802   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.436554   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:48:14.440741  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:48:14.440751  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:48:14.515072  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:48:14.515092  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 04:48:14.544694  481598 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:48:14.544736  481598 out.go:285] * 
	W1216 04:48:14.544844  481598 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:48:14.544900  481598 out.go:285] * 
	W1216 04:48:14.547108  481598 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:48:14.553105  481598 out.go:203] 
	W1216 04:48:14.555966  481598 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:48:14.556016  481598 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:48:14.556038  481598 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:48:14.559052  481598 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714709668Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.71475743Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714823679Z" level=info msg="Create NRI interface"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714952197Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714978487Z" level=info msg="runtime interface created"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714994996Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715003956Z" level=info msg="runtime interface starting up..."
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715015205Z" level=info msg="starting plugins..."
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715027849Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715097331Z" level=info msg="No systemd watchdog enabled"
	Dec 16 04:36:03 functional-763073 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.566937768Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5b381738-c32a-40c6-affb-c4aad9d726b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.567803155Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=7302f23d-29b3-4ddc-ad63-9af170663562 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.568336568Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=470a4814-2c77-4f21-97ca-d4b2d8b367c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.56886276Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e3d63019-6956-4b8d-9795-5e45ed470016 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.569572699Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1715eb88-0ece-47e1-8cf4-08ec329b9548 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.570118822Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=17ac1632-ceef-4623-82d4-95709ece00f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.570664255Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=9e736680-8e53-4709-9714-232fbfa617ef name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.365457664Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=66aba16f-2286-4957-9589-3f6b308f0653 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366373784Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a0b09546-fe1b-440e-8076-598a1e2930d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366892723Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=976ba277-fbb2-4db1-8ee0-ce87f329b2fa name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367464412Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=15d708f7-0c1f-4e61-bde7-afc75b1dc430 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367935941Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2d28f296-8f48-4bb2-bf27-13281f9a3b27 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368429435Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=82541142-23b6-4f48-816e-5b740356cd35 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368875848Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=29b0dee6-8ec8-4ecc-822d-bf19bcc0e034 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:48:15.809974   21238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:15.810857   21238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:15.812571   21238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:15.813269   21238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:15.814752   21238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	[Dec16 04:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:48:15 up  3:30,  0 user,  load average: 0.27, 0.21, 0.46
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:48:13 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:48:14 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 16 04:48:14 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:14 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:14 functional-763073 kubelet[21047]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:14 functional-763073 kubelet[21047]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:14 functional-763073 kubelet[21047]: E1216 04:48:14.136699   21047 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:48:14 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:48:14 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:48:14 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 16 04:48:14 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:14 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:14 functional-763073 kubelet[21141]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:14 functional-763073 kubelet[21141]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:14 functional-763073 kubelet[21141]: E1216 04:48:14.889959   21141 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:48:14 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:48:14 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:48:15 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 16 04:48:15 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:15 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:15 functional-763073 kubelet[21186]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:15 functional-763073 kubelet[21186]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:15 functional-763073 kubelet[21186]: E1216 04:48:15.637754   21186 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:48:15 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:48:15 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
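
The dump above shows the kubelet crash-looping on cgroup v1 validation, so kubeadm's wait-control-plane phase never sees a healthy kubelet on 127.0.0.1:10248. A minimal triage sketch, using only the commands the log itself suggests (the profile name is taken from this run; whether the cgroup-driver hint actually clears a cgroup v1 rejection is not established by this report):

	# Inspect kubelet state and its recent failures on the node
	minikube -p functional-763073 ssh -- sudo systemctl status kubelet
	minikube -p functional-763073 ssh -- sudo journalctl -xeu kubelet
	# Remediation suggested verbatim by minikube in the output above
	minikube start -p functional-763073 --extra-config=kubelet.cgroup-driver=systemd
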
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (384.804251ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.29s)
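
Both stdout dumps in this failure point at the same root cause: the kubelet's own config validation rejects the cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and the kubeadm preflight warning names the opt-out, the FailCgroupV1 kubelet configuration option. A hedged sketch of applying that opt-in on the node, assuming the field serializes as failCgroupV1 in the config file kubeadm writes (the casing, and whether kubeadm's separate SystemVerification skip is also required, are assumptions not confirmed by this report):

	# Append the explicit cgroup v1 opt-in to the kubelet config generated by
	# kubeadm (path taken from the log above), then restart the service.
	sudo sh -c 'echo "failCgroupV1: false" >> /var/lib/kubelet/config.yaml'
	sudo systemctl restart kubelet
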

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-763073 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-763073 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (57.84583ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-763073 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
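
The inspect output squares with both connection refusals seen earlier: the container is running and the apiserver port 8441/tcp is published to 127.0.0.1:33151 on the host, but nothing answers because the apiserver never started. A quick way to confirm the published mapping from the host (docker CLI assumed available, as it is on this Jenkins agent):

	# Print the host address the container's apiserver port is published on
	docker port functional-763073 8441/tcp
	# Expected from the inspect output above: 127.0.0.1:33151
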
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (337.742537ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-861171 image ls --format json --alsologtostderr                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls --format table --alsologtostderr                                                                                       │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ update-context │ functional-861171 update-context --alsologtostderr -v=2                                                                                           │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ image          │ functional-861171 image ls                                                                                                                        │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ delete         │ -p functional-861171                                                                                                                              │ functional-861171 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │ 16 Dec 25 04:21 UTC │
	│ start          │ -p functional-763073 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:21 UTC │                     │
	│ start          │ -p functional-763073 --alsologtostderr -v=8                                                                                                       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │                     │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add registry.k8s.io/pause:latest                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache add minikube-local-cache-test:functional-763073                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ functional-763073 cache delete minikube-local-cache-test:functional-763073                                                                        │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl images                                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │                     │
	│ cache          │ functional-763073 cache reload                                                                                                                    │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh            │ functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ kubectl        │ functional-763073 kubectl -- --context functional-763073 get pods                                                                                 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │                     │
	│ start          │ -p functional-763073 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:36 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:36:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:36:00.490248  481598 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:36:00.490394  481598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:36:00.490398  481598 out.go:374] Setting ErrFile to fd 2...
	I1216 04:36:00.490402  481598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:36:00.490827  481598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:36:00.491840  481598 out.go:368] Setting JSON to false
	I1216 04:36:00.492932  481598 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11907,"bootTime":1765847854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:36:00.493015  481598 start.go:143] virtualization:  
	I1216 04:36:00.496736  481598 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:36:00.500271  481598 notify.go:221] Checking for updates...
	I1216 04:36:00.500857  481598 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:36:00.504041  481598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:36:00.507246  481598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:36:00.510546  481598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:36:00.513957  481598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:36:00.517802  481598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:36:00.521529  481598 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:36:00.521658  481598 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:36:00.547571  481598 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:36:00.547683  481598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:36:00.612217  481598 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 04:36:00.602438298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:36:00.612309  481598 docker.go:319] overlay module found
	I1216 04:36:00.615642  481598 out.go:179] * Using the docker driver based on existing profile
	I1216 04:36:00.618516  481598 start.go:309] selected driver: docker
	I1216 04:36:00.618544  481598 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:00.618637  481598 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:36:00.618758  481598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:36:00.679148  481598 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 04:36:00.669430398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:36:00.679575  481598 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:36:00.679604  481598 cni.go:84] Creating CNI manager for ""
	I1216 04:36:00.679655  481598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:36:00.679698  481598 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:00.682841  481598 out.go:179] * Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	I1216 04:36:00.685829  481598 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:36:00.688866  481598 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:36:00.691890  481598 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:36:00.691964  481598 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:36:00.691972  481598 cache.go:65] Caching tarball of preloaded images
	I1216 04:36:00.691982  481598 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:36:00.692074  481598 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:36:00.692084  481598 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:36:00.692227  481598 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:36:00.712798  481598 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:36:00.712810  481598 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:36:00.712824  481598 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:36:00.712856  481598 start.go:360] acquireMachinesLock for functional-763073: {Name:mk37f96bdb0feffde12ec58bbc71256d58abc2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:36:00.712923  481598 start.go:364] duration metric: took 39.237µs to acquireMachinesLock for "functional-763073"
	I1216 04:36:00.712941  481598 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:36:00.712958  481598 fix.go:54] fixHost starting: 
	I1216 04:36:00.713253  481598 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:36:00.732242  481598 fix.go:112] recreateIfNeeded on functional-763073: state=Running err=<nil>
	W1216 04:36:00.732263  481598 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:36:00.735664  481598 out.go:252] * Updating the running docker "functional-763073" container ...
	I1216 04:36:00.735723  481598 machine.go:94] provisionDockerMachine start ...
	I1216 04:36:00.735809  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:00.753493  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:00.753813  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:00.753819  481598 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:36:00.888929  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:36:00.888952  481598 ubuntu.go:182] provisioning hostname "functional-763073"
	I1216 04:36:00.889028  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:00.908330  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:00.908643  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:00.908652  481598 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-763073 && echo "functional-763073" | sudo tee /etc/hostname
	I1216 04:36:01.055703  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:36:01.055772  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.082824  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:01.083159  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:01.083173  481598 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-763073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:36:01.221846  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: 
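The script above is minikube's idempotent /etc/hosts fixup: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. A standalone equivalent of the same logic (hostname hard-coded for illustration):

	# rewrite an existing 127.0.1.1 entry, or append one if absent
	if grep -q '^127.0.1.1\s' /etc/hosts; then
	  sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/' /etc/hosts
	else
	  echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts
	fi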
	I1216 04:36:01.221862  481598 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:36:01.221883  481598 ubuntu.go:190] setting up certificates
	I1216 04:36:01.221900  481598 provision.go:84] configureAuth start
	I1216 04:36:01.221962  481598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:36:01.240557  481598 provision.go:143] copyHostCerts
	I1216 04:36:01.240641  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 04:36:01.240650  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:36:01.240725  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:36:01.240821  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 04:36:01.240825  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:36:01.240849  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:36:01.240902  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 04:36:01.240908  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:36:01.240929  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:36:01.240972  481598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.functional-763073 san=[127.0.0.1 192.168.49.2 functional-763073 localhost minikube]
	I1216 04:36:01.624943  481598 provision.go:177] copyRemoteCerts
	I1216 04:36:01.624996  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:36:01.625036  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.650668  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:01.753682  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:36:01.770658  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:36:01.788383  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:36:01.805726  481598 provision.go:87] duration metric: took 583.803742ms to configureAuth
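configureAuth regenerated the machine server certificate with the SANs listed above (127.0.0.1, 192.168.49.2, functional-763073, localhost, minikube) and pushed it to /etc/docker on the node. To confirm the SANs actually made it into server.pem, assuming openssl is available on the host:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'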
	I1216 04:36:01.805744  481598 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:36:01.805933  481598 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:36:01.806039  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.826667  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:01.826973  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:01.826985  481598 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:36:02.160545  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:36:02.160560  481598 machine.go:97] duration metric: took 1.424830052s to provisionDockerMachine
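The SSH command above drops /etc/sysconfig/crio.minikube into the node container so cri-o treats the service CIDR (10.96.0.0/12) as an insecure registry, then restarts the service. A quick spot-check from the host, using the container name from this run:

	docker exec functional-763073 cat /etc/sysconfig/crio.minikube
	docker exec functional-763073 systemctl is-active crio   # expect "active"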
	I1216 04:36:02.160570  481598 start.go:293] postStartSetup for "functional-763073" (driver="docker")
	I1216 04:36:02.160582  481598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:36:02.160662  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:36:02.160707  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.182446  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.281163  481598 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:36:02.284621  481598 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:36:02.284640  481598 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:36:02.284650  481598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:36:02.284704  481598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:36:02.284795  481598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 04:36:02.284876  481598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> hosts in /etc/test/nested/copy/441727
	I1216 04:36:02.284919  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/441727
	I1216 04:36:02.293096  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:36:02.311133  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts --> /etc/test/nested/copy/441727/hosts (40 bytes)
	I1216 04:36:02.329120  481598 start.go:296] duration metric: took 168.535354ms for postStartSetup
	I1216 04:36:02.329220  481598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:36:02.329269  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.348104  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.442235  481598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:36:02.448236  481598 fix.go:56] duration metric: took 1.735283267s for fixHost
	I1216 04:36:02.448253  481598 start.go:83] releasing machines lock for "functional-763073", held for 1.735323136s
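The two df probes just above measure disk pressure on /var: the first extracts the use percentage, the second the free space in whole gigabytes. Run standalone (output values illustrative):

	df -h /var  | awk 'NR==2{print $5}'   # e.g. 23%
	df -BG /var | awk 'NR==2{print $4}'   # e.g. 52G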
	I1216 04:36:02.448324  481598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:36:02.466005  481598 ssh_runner.go:195] Run: cat /version.json
	I1216 04:36:02.466044  481598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:36:02.466046  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.466114  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.490975  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.491519  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.685578  481598 ssh_runner.go:195] Run: systemctl --version
	I1216 04:36:02.692865  481598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:36:02.731424  481598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:36:02.735810  481598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:36:02.735877  481598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:36:02.743925  481598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:36:02.743939  481598 start.go:496] detecting cgroup driver to use...
	I1216 04:36:02.743971  481598 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:36:02.744017  481598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:36:02.759444  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:36:02.772624  481598 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:36:02.772678  481598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:36:02.788424  481598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:36:02.802435  481598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:36:02.920156  481598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:36:03.035227  481598 docker.go:234] disabling docker service ...
	I1216 04:36:03.035430  481598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:36:03.052008  481598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:36:03.065420  481598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:36:03.183071  481598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:36:03.294099  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:36:03.311925  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:36:03.326859  481598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:36:03.326940  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.336429  481598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:36:03.336497  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.346614  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.357523  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.366947  481598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:36:03.376549  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.385465  481598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.394383  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.404860  481598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:36:03.413465  481598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:36:03.422752  481598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:36:03.536676  481598 ssh_runner.go:195] Run: sudo systemctl restart crio
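The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and the net.ipv4.ip_unprivileged_port_start=0 default sysctl, followed by a daemon-reload and cri-o restart. A sketch of how to confirm the drop-in ended up as intended (container name from this run):

	docker exec functional-763073 grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf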
	I1216 04:36:03.720606  481598 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:36:03.720702  481598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:36:03.724603  481598 start.go:564] Will wait 60s for crictl version
	I1216 04:36:03.724660  481598 ssh_runner.go:195] Run: which crictl
	I1216 04:36:03.728340  481598 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:36:03.755140  481598 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:36:03.755232  481598 ssh_runner.go:195] Run: crio --version
	I1216 04:36:03.787753  481598 ssh_runner.go:195] Run: crio --version
	I1216 04:36:03.823457  481598 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 04:36:03.826282  481598 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
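The Go template passed to docker network inspect above flattens the network's name, driver, IPAM subnet/gateway, MTU and container IPs into one JSON object. Individual fields can be pulled the same way; for example, the subnet behind 192.168.49.1/192.168.49.2 (output illustrative):

	docker network inspect functional-763073 \
	  --format '{{(index .IPAM.Config 0).Subnet}}'   # e.g. 192.168.49.0/24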
	I1216 04:36:03.843358  481598 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:36:03.850470  481598 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 04:36:03.853320  481598 kubeadm.go:884] updating cluster {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:36:03.853444  481598 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:36:03.853515  481598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:36:03.889904  481598 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:36:03.889916  481598 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:36:03.889975  481598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:36:03.917662  481598 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:36:03.917679  481598 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:36:03.917686  481598 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 04:36:03.917785  481598 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-763073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
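The [Service] override above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). The effective unit, drop-ins included, can be inspected on the node with:

	docker exec functional-763073 systemctl cat kubelet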
	I1216 04:36:03.917879  481598 ssh_runner.go:195] Run: crio config
	I1216 04:36:03.990629  481598 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 04:36:03.990650  481598 cni.go:84] Creating CNI manager for ""
	I1216 04:36:03.990663  481598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:36:03.990677  481598 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:36:03.990700  481598 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-763073 NodeName:functional-763073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:36:03.990828  481598 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-763073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
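The generated kubeadm.yaml stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file. Recent kubeadm releases can sanity-check such a file before it is applied; a sketch, assuming the pinned binary from this run is used inside the node:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml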
	I1216 04:36:03.990905  481598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:36:03.999067  481598 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:36:03.999139  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:36:04.008352  481598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 04:36:04.030586  481598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:36:04.045153  481598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1216 04:36:04.060527  481598 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:36:04.065456  481598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:36:04.194475  481598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:36:04.817563  481598 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073 for IP: 192.168.49.2
	I1216 04:36:04.817574  481598 certs.go:195] generating shared ca certs ...
	I1216 04:36:04.817590  481598 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:36:04.817743  481598 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:36:04.817795  481598 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:36:04.817801  481598 certs.go:257] generating profile certs ...
	I1216 04:36:04.817883  481598 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key
	I1216 04:36:04.817938  481598 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195
	I1216 04:36:04.817975  481598 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key
	I1216 04:36:04.818092  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 04:36:04.818123  481598 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 04:36:04.818130  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:36:04.818156  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:36:04.818185  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:36:04.818212  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:36:04.818262  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:36:04.818840  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:36:04.841132  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:36:04.865044  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:36:04.885624  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:36:04.903731  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:36:04.922117  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:36:04.940753  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:36:04.958685  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:36:04.976252  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:36:04.996895  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 04:36:05.024451  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 04:36:05.043756  481598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:36:05.056987  481598 ssh_runner.go:195] Run: openssl version
	I1216 04:36:05.063602  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.071513  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 04:36:05.079286  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.083120  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.083179  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.124591  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:36:05.132537  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.139980  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:36:05.147726  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.151460  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.151517  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.192644  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:36:05.200305  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.207653  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 04:36:05.215074  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.218794  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.218861  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.260201  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
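The ln/test pairs above follow OpenSSL's hashed-symlink convention: every CA in /etc/ssl/certs must be reachable as <subject-hash>.0, where the hash is what openssl x509 -hash prints. The b5213941.0 checked earlier is exactly that hash for minikubeCA.pem:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"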
	I1216 04:36:05.267700  481598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:36:05.271723  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:36:05.312770  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:36:05.354108  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:36:05.396136  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:36:05.437154  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:36:05.478283  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
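Each -checkend 86400 probe exits 0 only if the certificate will still be valid 24 hours from now, which is why the run proceeds straight to StartCluster instead of regenerating anything. Standalone, the same check reads:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  -checkend 86400 && echo "valid for >24h" || echo "expires within 24h"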
	I1216 04:36:05.519503  481598 kubeadm.go:401] StartCluster: {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:05.519581  481598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:36:05.519651  481598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:36:05.550651  481598 cri.go:89] found id: ""
	I1216 04:36:05.550716  481598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:36:05.558332  481598 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:36:05.558341  481598 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:36:05.558398  481598 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:36:05.566851  481598 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.567385  481598 kubeconfig.go:125] found "functional-763073" server: "https://192.168.49.2:8441"
	I1216 04:36:05.568647  481598 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:36:05.577205  481598 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 04:21:27.024069044 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 04:36:04.056943145 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1216 04:36:05.577214  481598 kubeadm.go:1161] stopping kube-system containers ...
	I1216 04:36:05.577232  481598 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 04:36:05.577291  481598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:36:05.613634  481598 cri.go:89] found id: ""
	I1216 04:36:05.613693  481598 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 04:36:05.631237  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:36:05.639373  481598 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 04:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 04:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 16 04:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 16 04:25 /etc/kubernetes/scheduler.conf
	
	I1216 04:36:05.639436  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:36:05.647869  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:36:05.655663  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.655719  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:36:05.663273  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:36:05.671183  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.671243  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:36:05.678591  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:36:05.686132  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.686188  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:36:05.693450  481598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:36:05.701540  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:05.748475  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.491126  481598 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.742626292s)
	I1216 04:36:07.491187  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.697669  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.751926  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
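Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. The same phases can be exercised by hand; a sketch, assuming the pinned binary path from the log and that --dry-run is supported as in upstream kubeadm:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init phase control-plane all \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run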
	I1216 04:36:07.807760  481598 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:36:07.807833  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:08.308888  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the identical pgrep poll repeats roughly every 500ms for the next minute (about 120 attempts in total, last at I1216 04:37:07.308901), never finding a kube-apiserver process ...]
	I1216 04:37:07.808015  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:07.808111  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:07.837945  481598 cri.go:89] found id: ""
	I1216 04:37:07.837959  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.837965  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:07.837970  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:07.838028  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:07.869351  481598 cri.go:89] found id: ""
	I1216 04:37:07.869366  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.869372  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:07.869377  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:07.869436  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:07.907276  481598 cri.go:89] found id: ""
	I1216 04:37:07.907290  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.907297  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:07.907302  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:07.907360  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:07.933358  481598 cri.go:89] found id: ""
	I1216 04:37:07.933373  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.933380  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:07.933385  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:07.933443  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:07.960678  481598 cri.go:89] found id: ""
	I1216 04:37:07.960692  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.960699  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:07.960704  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:07.960761  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:07.986399  481598 cri.go:89] found id: ""
	I1216 04:37:07.986414  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.986421  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:07.986426  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:07.986483  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:08.015016  481598 cri.go:89] found id: ""
	I1216 04:37:08.015031  481598 logs.go:282] 0 containers: []
	W1216 04:37:08.015038  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:08.015046  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:08.015057  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:08.088739  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:08.088761  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:08.107036  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:08.107052  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:08.176727  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:08.167962   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.168702   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.170464   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.171100   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.172772   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:08.167962   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.168702   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.170464   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.171100   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.172772   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:08.176736  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:08.176749  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:08.244460  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:08.244483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
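
This completes one full probe cycle: a process check, a per-component CRI container listing, then log collection (kubelet, dmesg, describe nodes, CRI-O, container status). The sequence can be replayed by hand over `minikube ssh`; the sketch below is assembled from the commands logged above and is not minikube source (the kubectl path matches the v1.35.0-beta.0 binary used in this run):

	# Hedged reproduction of the probe cycle, run on the node:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -n "$ids" ] && echo "$c: $ids" || echo "no container matching $c"
	done
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
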
	I1216 04:37:10.772766  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:10.783210  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:10.783271  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:10.811358  481598 cri.go:89] found id: ""
	I1216 04:37:10.811374  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.811382  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:10.811388  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:10.811451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:10.841691  481598 cri.go:89] found id: ""
	I1216 04:37:10.841705  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.841712  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:10.841717  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:10.841792  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:10.869111  481598 cri.go:89] found id: ""
	I1216 04:37:10.869133  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.869141  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:10.869146  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:10.869227  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:10.897617  481598 cri.go:89] found id: ""
	I1216 04:37:10.897632  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.897640  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:10.897646  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:10.897709  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:10.924814  481598 cri.go:89] found id: ""
	I1216 04:37:10.924829  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.924838  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:10.924849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:10.924909  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:10.951147  481598 cri.go:89] found id: ""
	I1216 04:37:10.951162  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.951170  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:10.951181  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:10.951240  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:10.977944  481598 cri.go:89] found id: ""
	I1216 04:37:10.977958  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.977965  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:10.977973  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:10.977984  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:11.046933  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:11.046953  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:11.062324  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:11.062340  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:11.128033  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:11.119557   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.119965   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.121750   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.122402   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.124048   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:11.119557   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.119965   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.121750   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.122402   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.124048   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:11.128044  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:11.128055  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:11.195835  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:11.195855  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:13.729443  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:13.739852  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:13.739911  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:13.765288  481598 cri.go:89] found id: ""
	I1216 04:37:13.765303  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.765310  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:13.765315  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:13.765372  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:13.791619  481598 cri.go:89] found id: ""
	I1216 04:37:13.791634  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.791641  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:13.791646  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:13.791713  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:13.829008  481598 cri.go:89] found id: ""
	I1216 04:37:13.829021  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.829028  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:13.829033  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:13.829115  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:13.860708  481598 cri.go:89] found id: ""
	I1216 04:37:13.860722  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.860729  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:13.860734  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:13.860795  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:13.890573  481598 cri.go:89] found id: ""
	I1216 04:37:13.890587  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.890594  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:13.890600  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:13.890659  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:13.921520  481598 cri.go:89] found id: ""
	I1216 04:37:13.921535  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.921543  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:13.921555  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:13.921616  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:13.950847  481598 cri.go:89] found id: ""
	I1216 04:37:13.950864  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.950882  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:13.950890  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:13.950901  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:13.965697  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:13.965713  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:14.040284  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:14.030948   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.031892   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.033714   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.034372   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.035987   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:14.030948   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.031892   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.033714   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.034372   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.035987   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:14.040295  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:14.040307  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:14.114244  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:14.114266  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:14.146926  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:14.146942  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:16.715163  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:16.725607  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:16.725688  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:16.751194  481598 cri.go:89] found id: ""
	I1216 04:37:16.751208  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.751215  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:16.751220  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:16.751277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:16.780407  481598 cri.go:89] found id: ""
	I1216 04:37:16.780421  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.780428  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:16.780433  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:16.780496  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:16.806409  481598 cri.go:89] found id: ""
	I1216 04:37:16.806424  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.806431  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:16.806436  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:16.806504  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:16.838220  481598 cri.go:89] found id: ""
	I1216 04:37:16.838235  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.838242  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:16.838247  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:16.838306  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:16.866315  481598 cri.go:89] found id: ""
	I1216 04:37:16.866329  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.866336  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:16.866341  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:16.866414  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:16.899090  481598 cri.go:89] found id: ""
	I1216 04:37:16.899105  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.899112  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:16.899117  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:16.899178  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:16.924588  481598 cri.go:89] found id: ""
	I1216 04:37:16.924603  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.924611  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:16.924618  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:16.924630  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:16.993464  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:16.993485  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:17.009562  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:17.009582  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:17.075397  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:17.067506   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.068020   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069521   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069902   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.071382   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:17.067506   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.068020   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069521   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069902   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.071382   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:17.075408  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:17.075421  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:17.144979  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:17.145001  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:19.675069  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:19.685090  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:19.685149  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:19.711697  481598 cri.go:89] found id: ""
	I1216 04:37:19.711712  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.711719  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:19.711724  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:19.711781  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:19.737017  481598 cri.go:89] found id: ""
	I1216 04:37:19.737031  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.737038  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:19.737043  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:19.737129  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:19.764129  481598 cri.go:89] found id: ""
	I1216 04:37:19.764143  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.764150  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:19.764155  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:19.764210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:19.790063  481598 cri.go:89] found id: ""
	I1216 04:37:19.790077  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.790084  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:19.790098  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:19.790154  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:19.821689  481598 cri.go:89] found id: ""
	I1216 04:37:19.821703  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.821710  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:19.821716  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:19.821774  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:19.854088  481598 cri.go:89] found id: ""
	I1216 04:37:19.854103  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.854111  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:19.854116  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:19.854178  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:19.893475  481598 cri.go:89] found id: ""
	I1216 04:37:19.893496  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.893505  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:19.893513  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:19.893524  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:19.961902  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:19.953918   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.954677   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956259   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956573   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.957902   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:19.953918   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.954677   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956259   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956573   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.957902   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:19.961916  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:19.961927  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:20.031206  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:20.031233  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:20.062576  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:20.062596  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:20.132798  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:20.132818  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:22.649716  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:22.659636  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:22.659698  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:22.684490  481598 cri.go:89] found id: ""
	I1216 04:37:22.684505  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.684512  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:22.684542  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:22.684599  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:22.709083  481598 cri.go:89] found id: ""
	I1216 04:37:22.709098  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.709105  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:22.709110  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:22.709165  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:22.734473  481598 cri.go:89] found id: ""
	I1216 04:37:22.734487  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.734494  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:22.734499  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:22.734557  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:22.759459  481598 cri.go:89] found id: ""
	I1216 04:37:22.759473  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.759480  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:22.759485  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:22.759540  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:22.784416  481598 cri.go:89] found id: ""
	I1216 04:37:22.784430  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.784437  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:22.784442  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:22.784508  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:22.808823  481598 cri.go:89] found id: ""
	I1216 04:37:22.808837  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.808844  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:22.808849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:22.808906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:22.845939  481598 cri.go:89] found id: ""
	I1216 04:37:22.845965  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.845973  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:22.845980  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:22.846001  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:22.939972  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:22.939998  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:22.969984  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:22.970003  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:23.041537  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:23.041560  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:23.059445  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:23.059461  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:23.127407  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:23.119122   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.119663   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121470   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121806   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.123327   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:23.119122   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.119663   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121470   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121806   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.123327   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:25.628052  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:25.638431  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:25.638504  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:25.665151  481598 cri.go:89] found id: ""
	I1216 04:37:25.665164  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.665172  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:25.665176  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:25.665249  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:25.695604  481598 cri.go:89] found id: ""
	I1216 04:37:25.695617  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.695625  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:25.695630  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:25.695691  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:25.720754  481598 cri.go:89] found id: ""
	I1216 04:37:25.720768  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.720775  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:25.720780  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:25.720839  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:25.746771  481598 cri.go:89] found id: ""
	I1216 04:37:25.746785  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.746792  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:25.746797  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:25.746857  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:25.776233  481598 cri.go:89] found id: ""
	I1216 04:37:25.776247  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.776264  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:25.776269  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:25.776342  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:25.803891  481598 cri.go:89] found id: ""
	I1216 04:37:25.803914  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.803922  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:25.803927  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:25.804021  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:25.845002  481598 cri.go:89] found id: ""
	I1216 04:37:25.845016  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.845023  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:25.845040  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:25.845053  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:25.921736  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:25.913341   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.914262   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.915800   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.916138   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.917723   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:25.913341   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.914262   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.915800   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.916138   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.917723   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:25.921746  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:25.921757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:25.989735  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:25.989756  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:26.020992  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:26.021012  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:26.094837  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:26.094856  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:28.610236  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:28.620641  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:28.620702  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:28.648449  481598 cri.go:89] found id: ""
	I1216 04:37:28.648463  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.648470  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:28.648480  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:28.648539  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:28.675317  481598 cri.go:89] found id: ""
	I1216 04:37:28.675332  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.675339  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:28.675344  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:28.675402  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:28.700978  481598 cri.go:89] found id: ""
	I1216 04:37:28.700992  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.700998  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:28.701003  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:28.701104  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:28.726354  481598 cri.go:89] found id: ""
	I1216 04:37:28.726367  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.726374  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:28.726379  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:28.726436  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:28.752843  481598 cri.go:89] found id: ""
	I1216 04:37:28.752857  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.752864  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:28.752869  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:28.752927  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:28.778190  481598 cri.go:89] found id: ""
	I1216 04:37:28.778205  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.778212  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:28.778217  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:28.778280  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:28.803029  481598 cri.go:89] found id: ""
	I1216 04:37:28.803044  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.803051  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:28.803059  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:28.803070  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:28.896742  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:28.888260   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.888935   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890571   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890932   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.892534   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:28.888260   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.888935   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890571   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890932   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.892534   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:28.896763  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:28.896776  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:28.964206  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:28.964228  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:28.996487  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:28.996503  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:29.063978  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:29.063998  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:31.580896  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:31.591181  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:31.591249  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:31.616263  481598 cri.go:89] found id: ""
	I1216 04:37:31.616277  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.616284  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:31.616289  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:31.616345  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:31.641685  481598 cri.go:89] found id: ""
	I1216 04:37:31.641700  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.641707  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:31.641712  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:31.641771  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:31.667472  481598 cri.go:89] found id: ""
	I1216 04:37:31.667487  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.667495  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:31.667500  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:31.667557  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:31.697212  481598 cri.go:89] found id: ""
	I1216 04:37:31.697241  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.697248  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:31.697253  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:31.697311  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:31.723185  481598 cri.go:89] found id: ""
	I1216 04:37:31.723199  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.723207  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:31.723212  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:31.723273  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:31.749934  481598 cri.go:89] found id: ""
	I1216 04:37:31.749957  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.749965  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:31.749970  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:31.750035  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:31.776884  481598 cri.go:89] found id: ""
	I1216 04:37:31.776905  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.776911  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:31.776922  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:31.776933  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:31.856147  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:31.846171   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.847794   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.848402   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850247   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850827   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:31.846171   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.847794   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.848402   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850247   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850827   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:31.856168  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:31.856188  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:31.928187  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:31.928207  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:31.960005  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:31.960023  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:32.031454  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:32.031474  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
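Each block above is one pass of the same health-check cycle: minikube probes for a kube-apiserver process with pgrep, lists CRI containers for every control-plane component via crictl, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later. A minimal Go sketch of that poll-and-gather shape (the helper names, the ~2.5 s interval, and the diagnostic command list are illustrative assumptions, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // components minikube looks for via `crictl ps -a --quiet --name=<name>`
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }

    // listContainer is an illustrative stand-in for cri.go's container listing.
    func listContainer(name string) ([]byte, error) {
        return exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    }

    func main() {
        for {
            found := false
            for _, c := range components {
                if out, err := listContainer(c); err == nil && len(out) > 0 {
                    found = true // at least one control-plane container exists
                }
            }
            if found {
                return
            }
            // nothing running: gather diagnostics, then retry
            for _, args := range [][]string{
                {"journalctl", "-u", "kubelet", "-n", "400"},
                {"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
                {"journalctl", "-u", "crio", "-n", "400"},
            } {
                exec.Command("sudo", args[0], args[1:]...).Run()
            }
            fmt.Println("control plane not up yet; retrying")
            time.Sleep(2500 * time.Millisecond)
        }
    }
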
	I1216 04:37:34.550103  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:34.560823  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:34.560882  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:34.587067  481598 cri.go:89] found id: ""
	I1216 04:37:34.587082  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.587092  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:34.587097  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:34.587160  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:34.613934  481598 cri.go:89] found id: ""
	I1216 04:37:34.613949  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.613956  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:34.613961  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:34.614018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:34.639997  481598 cri.go:89] found id: ""
	I1216 04:37:34.640011  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.640018  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:34.640023  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:34.640087  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:34.666140  481598 cri.go:89] found id: ""
	I1216 04:37:34.666154  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.666161  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:34.666166  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:34.666226  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:34.692116  481598 cri.go:89] found id: ""
	I1216 04:37:34.692131  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.692138  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:34.692143  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:34.692203  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:34.717134  481598 cri.go:89] found id: ""
	I1216 04:37:34.717148  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.717156  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:34.717161  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:34.717228  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:34.743931  481598 cri.go:89] found id: ""
	I1216 04:37:34.743946  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.743963  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:34.743971  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:34.743983  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:34.809826  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:34.809849  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:34.827619  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:34.827636  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:34.903666  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:34.894237   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.895124   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.896898   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.897701   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.898407   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:34.894237   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.895124   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.896898   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.897701   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.898407   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:34.903676  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:34.903686  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:34.972944  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:34.972967  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
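The container-status step runs `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`: resolve crictl's full path if `which` knows it (otherwise rely on $PATH), and fall back to `docker ps -a` only when the crictl invocation fails outright. The same try-then-fallback shape in Go (a sketch; `firstSuccessful` is a made-up helper, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // firstSuccessful runs each command line in order and returns the
    // output of the first one that exits 0 (hypothetical helper).
    func firstSuccessful(cmds [][]string) ([]byte, error) {
        var lastErr error
        for _, c := range cmds {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            if err == nil {
                return out, nil
            }
            lastErr = err
        }
        return nil, lastErr
    }

    func main() {
        out, err := firstSuccessful([][]string{
            {"sudo", "crictl", "ps", "-a"},
            {"sudo", "docker", "ps", "-a"}, // fallback when crictl is absent/broken
        })
        if err != nil {
            fmt.Println("no container runtime CLI answered:", err)
            return
        }
        fmt.Print(string(out))
    }
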
	I1216 04:37:37.507549  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:37.517802  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:37.517863  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:37.543131  481598 cri.go:89] found id: ""
	I1216 04:37:37.543147  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.543155  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:37.543167  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:37.543224  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:37.568202  481598 cri.go:89] found id: ""
	I1216 04:37:37.568216  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.568223  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:37.568231  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:37.568288  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:37.593976  481598 cri.go:89] found id: ""
	I1216 04:37:37.593991  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.593998  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:37.594003  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:37.594066  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:37.619760  481598 cri.go:89] found id: ""
	I1216 04:37:37.619774  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.619781  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:37.619787  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:37.619848  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:37.644836  481598 cri.go:89] found id: ""
	I1216 04:37:37.644850  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.644857  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:37.644862  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:37.644921  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:37.670454  481598 cri.go:89] found id: ""
	I1216 04:37:37.670468  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.670476  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:37.670481  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:37.670537  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:37.695742  481598 cri.go:89] found id: ""
	I1216 04:37:37.695762  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.695769  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:37.695777  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:37.695787  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:37.759713  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:37.759732  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:37.774589  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:37.774606  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:37.849933  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:37.841390   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.842110   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.843743   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.844252   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.845814   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:37.841390   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.842110   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.843743   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.844252   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.845814   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:37.849945  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:37.849955  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:37.928468  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:37.928489  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:40.459800  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:40.470285  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:40.470349  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:40.499380  481598 cri.go:89] found id: ""
	I1216 04:37:40.499394  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.499401  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:40.499406  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:40.499464  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:40.528986  481598 cri.go:89] found id: ""
	I1216 04:37:40.529000  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.529007  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:40.529012  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:40.529089  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:40.555623  481598 cri.go:89] found id: ""
	I1216 04:37:40.555638  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.555646  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:40.555651  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:40.555708  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:40.581298  481598 cri.go:89] found id: ""
	I1216 04:37:40.581312  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.581319  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:40.581324  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:40.581382  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:40.611085  481598 cri.go:89] found id: ""
	I1216 04:37:40.611099  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.611106  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:40.611113  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:40.611173  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:40.636162  481598 cri.go:89] found id: ""
	I1216 04:37:40.636178  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.636185  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:40.636190  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:40.636250  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:40.664257  481598 cri.go:89] found id: ""
	I1216 04:37:40.664272  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.664279  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:40.664287  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:40.664299  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:40.680011  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:40.680027  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:40.745907  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:40.737277   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.738066   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.739727   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.740303   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.741915   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:40.737277   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.738066   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.739727   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.740303   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.741915   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:40.745919  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:40.745932  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:40.814715  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:40.814735  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:40.859159  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:40.859181  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:43.432718  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:43.443193  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:43.443264  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:43.469157  481598 cri.go:89] found id: ""
	I1216 04:37:43.469187  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.469195  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:43.469200  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:43.469323  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:43.494783  481598 cri.go:89] found id: ""
	I1216 04:37:43.494796  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.494804  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:43.494809  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:43.494869  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:43.521488  481598 cri.go:89] found id: ""
	I1216 04:37:43.521502  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.521509  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:43.521514  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:43.521573  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:43.550707  481598 cri.go:89] found id: ""
	I1216 04:37:43.550721  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.550728  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:43.550733  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:43.550791  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:43.579977  481598 cri.go:89] found id: ""
	I1216 04:37:43.579991  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.579997  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:43.580002  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:43.580064  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:43.605041  481598 cri.go:89] found id: ""
	I1216 04:37:43.605056  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.605143  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:43.605149  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:43.605208  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:43.631632  481598 cri.go:89] found id: ""
	I1216 04:37:43.631658  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.631665  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:43.631672  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:43.631691  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:43.701085  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:43.701111  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:43.716379  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:43.716401  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:43.778569  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:43.770070   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.770734   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.772497   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.773037   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.774731   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:43.770070   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.770734   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.772497   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.773037   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.774731   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:43.778594  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:43.778606  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:43.850663  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:43.850686  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:46.388473  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:46.398649  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:46.398713  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:46.425758  481598 cri.go:89] found id: ""
	I1216 04:37:46.425772  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.425780  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:46.425785  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:46.425843  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:46.453363  481598 cri.go:89] found id: ""
	I1216 04:37:46.453377  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.453384  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:46.453389  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:46.453450  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:46.479051  481598 cri.go:89] found id: ""
	I1216 04:37:46.479066  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.479074  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:46.479079  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:46.479135  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:46.509758  481598 cri.go:89] found id: ""
	I1216 04:37:46.509773  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.509781  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:46.509786  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:46.509849  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:46.536775  481598 cri.go:89] found id: ""
	I1216 04:37:46.536788  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.536795  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:46.536801  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:46.536870  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:46.562238  481598 cri.go:89] found id: ""
	I1216 04:37:46.562253  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.562262  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:46.562268  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:46.562326  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:46.588577  481598 cri.go:89] found id: ""
	I1216 04:37:46.588591  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.588598  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:46.588606  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:46.588617  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:46.658427  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:46.658447  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:46.692280  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:46.692304  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:46.758854  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:46.758874  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:46.778062  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:46.778079  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:46.855875  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:46.846770   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.848177   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.849959   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.850258   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.851693   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:46.846770   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.848177   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.849959   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.850258   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.851693   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
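The five identical memcache.go errors per kubectl run come from the discovery client retrying its initial `/api` probe; every attempt dies at TCP connect because nothing is listening on localhost:8441, which is consistent with crictl reporting no kube-apiserver container at all. The whole failure reduces to a plain TCP reachability check (a sketch; the address comes from the log above, everything else is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // same endpoint kubectl is trying: the apiserver's advertised port
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // with no kube-apiserver container running, this prints
            // "connect: connection refused", mirroring the log above
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
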
	I1216 04:37:49.357557  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:49.367602  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:49.367665  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:49.393022  481598 cri.go:89] found id: ""
	I1216 04:37:49.393037  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.393044  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:49.393049  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:49.393125  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:49.421701  481598 cri.go:89] found id: ""
	I1216 04:37:49.421716  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.421723  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:49.421728  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:49.421789  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:49.447139  481598 cri.go:89] found id: ""
	I1216 04:37:49.447154  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.447161  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:49.447166  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:49.447226  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:49.472003  481598 cri.go:89] found id: ""
	I1216 04:37:49.472018  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.472026  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:49.472032  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:49.472090  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:49.497762  481598 cri.go:89] found id: ""
	I1216 04:37:49.497782  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.497790  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:49.497794  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:49.497853  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:49.527970  481598 cri.go:89] found id: ""
	I1216 04:37:49.527984  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.527992  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:49.527997  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:49.528055  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:49.554573  481598 cri.go:89] found id: ""
	I1216 04:37:49.554587  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.554596  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:49.554604  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:49.554615  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:49.620959  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:49.620979  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:49.636096  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:49.636115  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:49.705535  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:49.696916   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.697607   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699320   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699896   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.701682   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:49.696916   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.697607   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699320   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699896   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.701682   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:49.705545  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:49.705556  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:49.774081  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:49.774101  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:52.303119  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:52.313248  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:52.313317  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:52.339092  481598 cri.go:89] found id: ""
	I1216 04:37:52.339106  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.339113  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:52.339118  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:52.339181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:52.370928  481598 cri.go:89] found id: ""
	I1216 04:37:52.370942  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.370949  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:52.370954  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:52.371011  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:52.395986  481598 cri.go:89] found id: ""
	I1216 04:37:52.396000  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.396007  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:52.396012  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:52.396068  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:52.425010  481598 cri.go:89] found id: ""
	I1216 04:37:52.425024  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.425031  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:52.425036  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:52.425118  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:52.450781  481598 cri.go:89] found id: ""
	I1216 04:37:52.450796  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.450803  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:52.450808  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:52.450867  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:52.476589  481598 cri.go:89] found id: ""
	I1216 04:37:52.476603  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.476611  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:52.476617  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:52.476675  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:52.503929  481598 cri.go:89] found id: ""
	I1216 04:37:52.503944  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.503951  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:52.503959  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:52.503970  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:52.519124  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:52.519149  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:52.587049  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:52.577711   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.578577   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.580576   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.581341   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.583137   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:52.577711   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.578577   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.580576   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.581341   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.583137   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:52.587060  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:52.587072  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:52.657393  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:52.657415  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:52.686271  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:52.686289  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:55.258225  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:55.268276  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:55.268339  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:55.295458  481598 cri.go:89] found id: ""
	I1216 04:37:55.295471  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.295479  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:55.295484  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:55.295550  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:55.322181  481598 cri.go:89] found id: ""
	I1216 04:37:55.322195  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.322202  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:55.322207  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:55.322315  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:55.347301  481598 cri.go:89] found id: ""
	I1216 04:37:55.347316  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.347323  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:55.347329  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:55.347390  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:55.372973  481598 cri.go:89] found id: ""
	I1216 04:37:55.372988  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.372995  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:55.373000  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:55.373057  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:55.398159  481598 cri.go:89] found id: ""
	I1216 04:37:55.398173  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.398179  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:55.398184  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:55.398245  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:55.423108  481598 cri.go:89] found id: ""
	I1216 04:37:55.423122  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.423128  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:55.423133  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:55.423198  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:55.449345  481598 cri.go:89] found id: ""
	I1216 04:37:55.449360  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.449367  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:55.449375  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:55.449397  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:55.514641  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:55.514662  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:55.529353  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:55.529369  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:55.598810  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:55.589643   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.590588   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.591554   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593248   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593891   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:55.589643   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.590588   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.591554   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593248   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593891   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:55.598830  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:55.598842  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:55.666947  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:55.666967  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:58.197584  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:58.208946  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:58.209018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:58.234805  481598 cri.go:89] found id: ""
	I1216 04:37:58.234819  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.234826  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:58.234831  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:58.234886  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:58.259158  481598 cri.go:89] found id: ""
	I1216 04:37:58.259171  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.259178  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:58.259183  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:58.259241  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:58.286151  481598 cri.go:89] found id: ""
	I1216 04:37:58.286165  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.286172  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:58.286177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:58.286234  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:58.310737  481598 cri.go:89] found id: ""
	I1216 04:37:58.310750  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.310757  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:58.310762  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:58.310817  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:58.334963  481598 cri.go:89] found id: ""
	I1216 04:37:58.334978  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.334985  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:58.334989  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:58.335054  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:58.363884  481598 cri.go:89] found id: ""
	I1216 04:37:58.363910  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.363918  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:58.363924  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:58.363992  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:58.387948  481598 cri.go:89] found id: ""
	I1216 04:37:58.387961  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.387968  481598 logs.go:284] No container was found matching "kindnet"
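The block above is minikube walking its control-plane checklist with crictl; every component comes back empty, consistent with the dead apiserver. The same sweep can be reproduced by hand (a sketch; the component list and the crictl flags are verbatim from the log, the loop itself is an assumption):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet; do
	    # --quiet prints only container IDs; empty output means no match
	    ids=$(sudo crictl ps -a --quiet --name="$c")
	    if [ -z "$ids" ]; then echo "no container matching $c"; else echo "$c: $ids"; fi
	done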
	I1216 04:37:58.387977  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:58.387988  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:58.452873  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:58.452892  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:58.468670  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:58.468688  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:58.537376  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:58.528562   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.529202   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.530985   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.531559   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.533122   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:37:58.537385  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:58.537396  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:58.606317  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:58.606339  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:01.135427  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:01.146890  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:01.146955  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:01.174260  481598 cri.go:89] found id: ""
	I1216 04:38:01.174275  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.174282  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:01.174287  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:01.174347  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:01.199944  481598 cri.go:89] found id: ""
	I1216 04:38:01.199958  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.199965  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:01.199970  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:01.200033  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:01.228798  481598 cri.go:89] found id: ""
	I1216 04:38:01.228814  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.228820  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:01.228825  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:01.228884  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:01.255775  481598 cri.go:89] found id: ""
	I1216 04:38:01.255789  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.255796  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:01.255801  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:01.255860  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:01.281657  481598 cri.go:89] found id: ""
	I1216 04:38:01.281671  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.281678  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:01.281683  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:01.281742  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:01.307766  481598 cri.go:89] found id: ""
	I1216 04:38:01.307779  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.307786  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:01.307791  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:01.307851  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:01.333581  481598 cri.go:89] found id: ""
	I1216 04:38:01.333595  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.333602  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:01.333610  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:01.333621  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:01.399337  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:01.399356  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:01.414266  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:01.414283  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:01.482637  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:01.474533   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.475363   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.476875   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.477409   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.478874   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:01.482650  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:01.482662  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:01.550883  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:01.550905  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:04.081199  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:04.093060  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:04.093177  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:04.125499  481598 cri.go:89] found id: ""
	I1216 04:38:04.125513  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.125521  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:04.125526  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:04.125595  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:04.151973  481598 cri.go:89] found id: ""
	I1216 04:38:04.151987  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.151994  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:04.151999  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:04.152058  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:04.180246  481598 cri.go:89] found id: ""
	I1216 04:38:04.180260  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.180266  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:04.180271  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:04.180328  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:04.207652  481598 cri.go:89] found id: ""
	I1216 04:38:04.207665  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.207672  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:04.207678  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:04.207735  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:04.233457  481598 cri.go:89] found id: ""
	I1216 04:38:04.233470  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.233477  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:04.233483  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:04.233540  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:04.259854  481598 cri.go:89] found id: ""
	I1216 04:38:04.259868  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.259875  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:04.259880  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:04.259941  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:04.285804  481598 cri.go:89] found id: ""
	I1216 04:38:04.285818  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.285825  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:04.285832  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:04.285843  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:04.364313  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:04.364343  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:04.397537  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:04.397559  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:04.466334  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:04.466358  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:04.481695  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:04.481712  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:04.549601  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:04.541286   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.542136   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.543652   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.544110   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.545613   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:07.049858  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:07.060224  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:07.060286  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:07.095538  481598 cri.go:89] found id: ""
	I1216 04:38:07.095552  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.095558  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:07.095572  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:07.095630  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:07.134098  481598 cri.go:89] found id: ""
	I1216 04:38:07.134113  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.134120  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:07.134125  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:07.134181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:07.160282  481598 cri.go:89] found id: ""
	I1216 04:38:07.160296  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.160312  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:07.160317  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:07.160375  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:07.186194  481598 cri.go:89] found id: ""
	I1216 04:38:07.186208  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.186215  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:07.186220  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:07.186277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:07.211185  481598 cri.go:89] found id: ""
	I1216 04:38:07.211198  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.211211  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:07.211216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:07.211274  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:07.236131  481598 cri.go:89] found id: ""
	I1216 04:38:07.236145  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.236171  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:07.236177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:07.236243  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:07.262438  481598 cri.go:89] found id: ""
	I1216 04:38:07.262452  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.262459  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:07.262467  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:07.262477  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:07.331225  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:07.331246  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:07.359219  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:07.359236  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:07.426207  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:07.426225  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:07.441345  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:07.441364  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:07.509422  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:07.501041   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.501780   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503380   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503873   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.505492   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
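Note the cadence: the pgrep/crictl sweep re-runs roughly every three seconds (04:37:55, :58, 04:38:01, :04, :07, ...), which is the health-check loop waiting for the apiserver port to open. A hand-rolled equivalent while debugging might look like this (a sketch; the 3s interval matches the log timestamps, the 120s budget is an assumption):

	deadline=$((SECONDS + 120))
	until curl -sk --max-time 2 https://localhost:8441/livez >/dev/null; do
	    [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up"; break; }
	    sleep 3
	done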
	I1216 04:38:10.011147  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:10.023261  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:10.023327  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:10.050971  481598 cri.go:89] found id: ""
	I1216 04:38:10.050986  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.050994  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:10.050999  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:10.051073  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:10.085339  481598 cri.go:89] found id: ""
	I1216 04:38:10.085353  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.085360  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:10.085366  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:10.085434  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:10.124529  481598 cri.go:89] found id: ""
	I1216 04:38:10.124543  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.124551  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:10.124556  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:10.124624  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:10.164418  481598 cri.go:89] found id: ""
	I1216 04:38:10.164434  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.164442  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:10.164448  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:10.164517  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:10.190732  481598 cri.go:89] found id: ""
	I1216 04:38:10.190746  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.190753  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:10.190758  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:10.190815  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:10.216314  481598 cri.go:89] found id: ""
	I1216 04:38:10.216339  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.216346  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:10.216352  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:10.216419  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:10.241726  481598 cri.go:89] found id: ""
	I1216 04:38:10.241747  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.241755  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:10.241768  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:10.241780  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:10.314496  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:10.304987   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.305903   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.306681   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.308501   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.309133   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:10.314506  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:10.314520  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:10.383929  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:10.383952  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:10.414686  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:10.414703  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:10.480296  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:10.480315  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
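Each failed cycle gathers the same five-part bundle: the kubelet journal, filtered dmesg, a describe-nodes attempt, the CRI-O journal, and a container listing. Collecting an identical bundle manually is straightforward (commands are verbatim from the log; the output file path is an assumption):

	{
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo crictl ps -a
	} > /tmp/minikube-diag.txt 2>&1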
	I1216 04:38:12.997386  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:13.013029  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:13.013152  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:13.043756  481598 cri.go:89] found id: ""
	I1216 04:38:13.043772  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.043779  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:13.043784  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:13.043841  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:13.078538  481598 cri.go:89] found id: ""
	I1216 04:38:13.078552  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.078559  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:13.078564  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:13.078625  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:13.107509  481598 cri.go:89] found id: ""
	I1216 04:38:13.107523  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.107530  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:13.107535  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:13.107590  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:13.144886  481598 cri.go:89] found id: ""
	I1216 04:38:13.144900  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.144907  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:13.144912  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:13.144967  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:13.172261  481598 cri.go:89] found id: ""
	I1216 04:38:13.172275  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.172282  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:13.172287  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:13.172346  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:13.200255  481598 cri.go:89] found id: ""
	I1216 04:38:13.200270  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.200277  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:13.200282  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:13.200339  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:13.231840  481598 cri.go:89] found id: ""
	I1216 04:38:13.231855  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.231864  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:13.231871  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:13.231882  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:13.305140  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:13.305162  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:13.320119  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:13.320135  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:13.384652  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:13.376630   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.377445   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.378990   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.379381   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.380897   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:13.384662  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:13.384672  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:13.452891  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:13.452913  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:15.986467  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:15.996642  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:15.996705  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:16.023730  481598 cri.go:89] found id: ""
	I1216 04:38:16.023745  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.023752  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:16.023757  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:16.023814  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:16.048187  481598 cri.go:89] found id: ""
	I1216 04:38:16.048202  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.048209  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:16.048214  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:16.048270  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:16.084197  481598 cri.go:89] found id: ""
	I1216 04:38:16.084210  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.084217  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:16.084222  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:16.084279  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:16.114000  481598 cri.go:89] found id: ""
	I1216 04:38:16.114014  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.114021  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:16.114026  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:16.114095  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:16.146003  481598 cri.go:89] found id: ""
	I1216 04:38:16.146016  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.146023  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:16.146028  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:16.146085  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:16.171053  481598 cri.go:89] found id: ""
	I1216 04:38:16.171067  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.171074  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:16.171079  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:16.171146  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:16.195607  481598 cri.go:89] found id: ""
	I1216 04:38:16.195621  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.195629  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:16.195637  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:16.195647  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:16.261510  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:16.261531  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:16.276956  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:16.276972  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:16.337904  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:16.329776   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.330345   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.331568   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.332130   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.333841   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:16.337914  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:16.337925  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:16.407434  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:16.407456  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:18.938513  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:18.948612  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:18.948671  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:18.973989  481598 cri.go:89] found id: ""
	I1216 04:38:18.974004  481598 logs.go:282] 0 containers: []
	W1216 04:38:18.974011  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:18.974016  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:18.974076  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:18.999416  481598 cri.go:89] found id: ""
	I1216 04:38:18.999430  481598 logs.go:282] 0 containers: []
	W1216 04:38:18.999437  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:18.999442  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:18.999499  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:19.036420  481598 cri.go:89] found id: ""
	I1216 04:38:19.036433  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.036440  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:19.036444  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:19.036500  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:19.063584  481598 cri.go:89] found id: ""
	I1216 04:38:19.063600  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.063617  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:19.063623  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:19.063694  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:19.099252  481598 cri.go:89] found id: ""
	I1216 04:38:19.099275  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.099283  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:19.099289  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:19.099363  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:19.126285  481598 cri.go:89] found id: ""
	I1216 04:38:19.126307  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.126315  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:19.126320  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:19.126387  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:19.151707  481598 cri.go:89] found id: ""
	I1216 04:38:19.151722  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.151738  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:19.151746  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:19.151757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:19.216698  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:19.216723  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:19.231764  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:19.231783  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:19.299324  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:19.291049   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.291658   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.293310   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.293836   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.295297   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:19.299334  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:19.299344  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:19.368556  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:19.368580  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:21.906105  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:21.916147  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:21.916206  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:21.941307  481598 cri.go:89] found id: ""
	I1216 04:38:21.941321  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.941328  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:21.941333  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:21.941399  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:21.966745  481598 cri.go:89] found id: ""
	I1216 04:38:21.966760  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.966767  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:21.966772  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:21.966831  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:21.996091  481598 cri.go:89] found id: ""
	I1216 04:38:21.996106  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.996113  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:21.996117  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:21.996176  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:22.022731  481598 cri.go:89] found id: ""
	I1216 04:38:22.022746  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.022753  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:22.022758  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:22.022820  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:22.055034  481598 cri.go:89] found id: ""
	I1216 04:38:22.055048  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.055067  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:22.055072  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:22.055136  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:22.106853  481598 cri.go:89] found id: ""
	I1216 04:38:22.106868  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.106875  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:22.106880  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:22.106949  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:22.143371  481598 cri.go:89] found id: ""
	I1216 04:38:22.143385  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.143392  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:22.143399  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:22.143410  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:22.209056  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:22.200890   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.201492   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203157   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203493   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.204997   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:22.200890   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.201492   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203157   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203493   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.204997   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:22.209083  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:22.209096  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:22.276728  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:22.276748  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:22.308467  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:22.308483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:22.373121  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:22.373141  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:24.888068  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:24.898375  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:24.898438  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:24.922926  481598 cri.go:89] found id: ""
	I1216 04:38:24.922940  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.922953  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:24.922958  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:24.923018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:24.948274  481598 cri.go:89] found id: ""
	I1216 04:38:24.948288  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.948296  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:24.948300  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:24.948366  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:24.973866  481598 cri.go:89] found id: ""
	I1216 04:38:24.973880  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.973888  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:24.973893  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:24.973950  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:24.999743  481598 cri.go:89] found id: ""
	I1216 04:38:24.999757  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.999764  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:24.999769  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:24.999827  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:25.030266  481598 cri.go:89] found id: ""
	I1216 04:38:25.030280  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.030298  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:25.030303  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:25.030363  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:25.055976  481598 cri.go:89] found id: ""
	I1216 04:38:25.055991  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.056008  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:25.056014  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:25.056070  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:25.096522  481598 cri.go:89] found id: ""
	I1216 04:38:25.096537  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.096553  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:25.096568  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:25.096580  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:25.171632  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:25.162141   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.162937   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.164740   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.165464   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.166973   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:25.162141   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.162937   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.164740   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.165464   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.166973   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:25.171649  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:25.171661  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:25.239309  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:25.239330  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:25.268791  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:25.268807  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:25.345864  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:25.345887  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:27.863617  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:27.874797  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:27.874872  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:27.904044  481598 cri.go:89] found id: ""
	I1216 04:38:27.904057  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.904064  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:27.904070  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:27.904135  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:27.930157  481598 cri.go:89] found id: ""
	I1216 04:38:27.930172  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.930179  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:27.930184  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:27.930248  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:27.960176  481598 cri.go:89] found id: ""
	I1216 04:38:27.960203  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.960211  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:27.960216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:27.960287  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:27.986202  481598 cri.go:89] found id: ""
	I1216 04:38:27.986215  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.986222  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:27.986227  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:27.986284  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:28.017804  481598 cri.go:89] found id: ""
	I1216 04:38:28.017818  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.017825  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:28.017830  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:28.017899  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:28.048381  481598 cri.go:89] found id: ""
	I1216 04:38:28.048397  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.048404  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:28.048410  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:28.048469  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:28.089010  481598 cri.go:89] found id: ""
	I1216 04:38:28.089024  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.089032  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:28.089040  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:28.089051  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:28.107163  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:28.107185  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:28.185125  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:28.176718   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.177346   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179024   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179600   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.181158   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:28.176718   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.177346   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179024   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179600   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.181158   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:28.185136  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:28.185146  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:28.253973  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:28.253993  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:28.284589  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:28.284611  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:30.850377  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:30.860658  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:30.860717  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:30.885504  481598 cri.go:89] found id: ""
	I1216 04:38:30.885519  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.885526  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:30.885531  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:30.885592  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:30.910273  481598 cri.go:89] found id: ""
	I1216 04:38:30.910287  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.910294  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:30.910299  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:30.910360  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:30.935120  481598 cri.go:89] found id: ""
	I1216 04:38:30.935134  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.935140  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:30.935145  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:30.935200  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:30.960866  481598 cri.go:89] found id: ""
	I1216 04:38:30.960879  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.960886  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:30.960891  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:30.960947  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:30.986279  481598 cri.go:89] found id: ""
	I1216 04:38:30.986294  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.986302  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:30.986306  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:30.986367  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:31.014463  481598 cri.go:89] found id: ""
	I1216 04:38:31.014486  481598 logs.go:282] 0 containers: []
	W1216 04:38:31.014493  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:31.014499  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:31.014561  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:31.041177  481598 cri.go:89] found id: ""
	I1216 04:38:31.041198  481598 logs.go:282] 0 containers: []
	W1216 04:38:31.041205  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:31.041213  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:31.041248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:31.083930  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:31.083946  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:31.155612  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:31.155632  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:31.171599  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:31.171616  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:31.238570  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:31.230375   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.231355   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.232487   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.233079   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.234687   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:31.230375   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.231355   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.232487   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.233079   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.234687   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:31.238580  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:31.238590  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:33.806752  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:33.816682  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:33.816748  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:33.841422  481598 cri.go:89] found id: ""
	I1216 04:38:33.841437  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.841444  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:33.841449  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:33.841508  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:33.866870  481598 cri.go:89] found id: ""
	I1216 04:38:33.866884  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.866891  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:33.866896  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:33.866954  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:33.892338  481598 cri.go:89] found id: ""
	I1216 04:38:33.892352  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.892360  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:33.892365  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:33.892428  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:33.920004  481598 cri.go:89] found id: ""
	I1216 04:38:33.920018  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.920025  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:33.920030  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:33.920088  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:33.950159  481598 cri.go:89] found id: ""
	I1216 04:38:33.950173  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.950180  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:33.950185  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:33.950244  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:33.976065  481598 cri.go:89] found id: ""
	I1216 04:38:33.976079  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.976086  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:33.976092  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:33.976172  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:34.001694  481598 cri.go:89] found id: ""
	I1216 04:38:34.001710  481598 logs.go:282] 0 containers: []
	W1216 04:38:34.001721  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:34.001729  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:34.001741  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:34.041633  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:34.041651  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:34.108611  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:34.108630  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:34.125509  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:34.125525  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:34.196710  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:34.188193   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.189247   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191038   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191344   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.192807   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:34.188193   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.189247   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191038   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191344   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.192807   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:34.196735  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:34.196746  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:36.764814  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:36.774892  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:36.774950  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:36.800624  481598 cri.go:89] found id: ""
	I1216 04:38:36.800640  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.800647  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:36.800652  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:36.800715  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:36.826259  481598 cri.go:89] found id: ""
	I1216 04:38:36.826274  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.826281  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:36.826286  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:36.826343  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:36.852246  481598 cri.go:89] found id: ""
	I1216 04:38:36.852269  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.852277  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:36.852282  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:36.852351  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:36.877659  481598 cri.go:89] found id: ""
	I1216 04:38:36.877680  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.877688  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:36.877693  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:36.877752  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:36.903365  481598 cri.go:89] found id: ""
	I1216 04:38:36.903379  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.903385  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:36.903390  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:36.903446  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:36.928313  481598 cri.go:89] found id: ""
	I1216 04:38:36.928328  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.928335  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:36.928341  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:36.928399  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:36.953145  481598 cri.go:89] found id: ""
	I1216 04:38:36.953158  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.953165  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:36.953172  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:36.953182  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:37.018934  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:37.018956  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:37.036483  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:37.036500  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:37.114492  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:37.106457   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.106872   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108430   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108750   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.110247   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:37.106457   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.106872   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108430   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108750   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.110247   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:37.114503  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:37.114514  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:37.191646  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:37.191667  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:39.722033  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:39.731793  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:39.731852  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:39.756808  481598 cri.go:89] found id: ""
	I1216 04:38:39.756822  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.756829  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:39.756834  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:39.756891  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:39.782419  481598 cri.go:89] found id: ""
	I1216 04:38:39.782440  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.782448  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:39.782453  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:39.782510  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:39.807545  481598 cri.go:89] found id: ""
	I1216 04:38:39.807559  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.807576  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:39.807581  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:39.807639  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:39.836801  481598 cri.go:89] found id: ""
	I1216 04:38:39.836816  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.836832  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:39.836844  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:39.836914  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:39.861851  481598 cri.go:89] found id: ""
	I1216 04:38:39.861865  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.861872  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:39.861877  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:39.861935  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:39.891116  481598 cri.go:89] found id: ""
	I1216 04:38:39.891130  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.891137  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:39.891144  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:39.891200  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:39.917011  481598 cri.go:89] found id: ""
	I1216 04:38:39.917026  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.917032  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:39.917040  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:39.917050  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:39.983103  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:39.983124  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:39.997812  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:39.997829  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:40.072880  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:40.062419   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.063322   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066458   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066896   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.068451   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:40.062419   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.063322   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066458   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066896   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.068451   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:40.072890  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:40.072902  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:40.155262  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:40.155284  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:42.686177  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:42.696709  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:42.696766  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:42.723669  481598 cri.go:89] found id: ""
	I1216 04:38:42.723684  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.723691  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:42.723697  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:42.723762  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:42.751573  481598 cri.go:89] found id: ""
	I1216 04:38:42.751587  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.751594  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:42.751599  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:42.751660  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:42.777155  481598 cri.go:89] found id: ""
	I1216 04:38:42.777170  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.777177  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:42.777182  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:42.777253  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:42.802762  481598 cri.go:89] found id: ""
	I1216 04:38:42.802776  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.802783  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:42.802788  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:42.802847  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:42.828278  481598 cri.go:89] found id: ""
	I1216 04:38:42.828291  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.828299  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:42.828303  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:42.828361  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:42.854186  481598 cri.go:89] found id: ""
	I1216 04:38:42.854211  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.854219  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:42.854224  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:42.854281  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:42.879809  481598 cri.go:89] found id: ""
	I1216 04:38:42.879822  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.879831  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:42.879839  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:42.879851  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:42.945305  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:42.935474   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.936560   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.937507   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939318   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939948   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:42.935474   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.936560   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.937507   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939318   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939948   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:42.945315  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:42.945326  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:43.019176  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:43.019199  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:43.048232  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:43.048248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:43.128355  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:43.128376  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:45.644135  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:45.654691  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:45.654750  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:45.682129  481598 cri.go:89] found id: ""
	I1216 04:38:45.682143  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.682151  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:45.682156  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:45.682216  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:45.706956  481598 cri.go:89] found id: ""
	I1216 04:38:45.706970  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.706977  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:45.706981  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:45.707040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:45.732479  481598 cri.go:89] found id: ""
	I1216 04:38:45.732493  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.732500  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:45.732505  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:45.732563  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:45.757526  481598 cri.go:89] found id: ""
	I1216 04:38:45.757540  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.757547  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:45.757553  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:45.757610  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:45.787392  481598 cri.go:89] found id: ""
	I1216 04:38:45.787407  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.787414  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:45.787419  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:45.787481  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:45.817452  481598 cri.go:89] found id: ""
	I1216 04:38:45.817477  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.817484  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:45.817490  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:45.817549  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:45.843705  481598 cri.go:89] found id: ""
	I1216 04:38:45.843732  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.843744  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:45.843752  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:45.843762  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:45.909394  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:45.909415  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:45.924650  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:45.924667  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:45.985242  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:45.976918   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.977461   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.978500   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980038   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980480   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:45.976918   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.977461   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.978500   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980038   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980480   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:45.985251  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:45.985262  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:46.060306  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:46.060333  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
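
The block above is one full iteration of minikube's apiserver wait loop: it probes for a running kube-apiserver process with pgrep, then asks the CRI runtime for each expected control-plane container in turn; every query returns an empty ID list, so it falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later. The same probes can be replayed by hand inside the node; a minimal sketch using only the commands shown in the log (entering the node via "minikube ssh" is an assumption about the setup):

    # replay the health probes from the log, inside the minikube node
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # no output: no apiserver process
    for c in kube-apiserver etcd coredns kube-scheduler \
             kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"        # empty: container was never created
    done
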
	I1216 04:38:48.603994  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:48.614177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:48.614238  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:48.639598  481598 cri.go:89] found id: ""
	I1216 04:38:48.639612  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.639620  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:48.639625  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:48.639685  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:48.668445  481598 cri.go:89] found id: ""
	I1216 04:38:48.668458  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.668465  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:48.668470  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:48.668525  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:48.698321  481598 cri.go:89] found id: ""
	I1216 04:38:48.698336  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.698343  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:48.698348  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:48.698410  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:48.724272  481598 cri.go:89] found id: ""
	I1216 04:38:48.724286  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.724293  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:48.724298  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:48.724367  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:48.748881  481598 cri.go:89] found id: ""
	I1216 04:38:48.748895  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.748902  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:48.748907  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:48.748965  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:48.773436  481598 cri.go:89] found id: ""
	I1216 04:38:48.773450  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.773456  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:48.773462  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:48.773518  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:48.798866  481598 cri.go:89] found id: ""
	I1216 04:38:48.798880  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.798887  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:48.798894  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:48.798904  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:48.830890  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:48.830906  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:48.897158  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:48.897179  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:48.912309  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:48.912326  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:48.979282  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:48.970954   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.971966   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.972659   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974127   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974422   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:48.970954   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.971966   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.972659   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974127   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974422   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:48.979293  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:48.979304  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
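
Every "describe nodes" attempt fails identically: nothing is listening on localhost:8441, this profile's apiserver port, so kubectl gets connection refused before it can issue a single API request. Whether the apiserver ever bound the port can be checked directly; a sketch under the assumption that ss and curl are available inside the node (-k is needed because the apiserver serves a self-signed certificate):

    sudo ss -ltnp | grep 8441 || echo "nothing listening on :8441"
    curl -ksS https://localhost:8441/healthz       # connection refused until the apiserver is up
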
	I1216 04:38:51.548916  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:51.559621  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:51.559691  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:51.585186  481598 cri.go:89] found id: ""
	I1216 04:38:51.585201  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.585208  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:51.585214  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:51.585281  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:51.610438  481598 cri.go:89] found id: ""
	I1216 04:38:51.610454  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.610462  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:51.610466  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:51.610523  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:51.640579  481598 cri.go:89] found id: ""
	I1216 04:38:51.640594  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.640601  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:51.640607  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:51.640665  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:51.667755  481598 cri.go:89] found id: ""
	I1216 04:38:51.667770  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.667778  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:51.667783  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:51.667840  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:51.697652  481598 cri.go:89] found id: ""
	I1216 04:38:51.697666  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.697673  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:51.697678  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:51.697738  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:51.723173  481598 cri.go:89] found id: ""
	I1216 04:38:51.723188  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.723195  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:51.723200  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:51.723266  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:51.748836  481598 cri.go:89] found id: ""
	I1216 04:38:51.748851  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.748858  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:51.748865  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:51.748876  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:51.790045  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:51.790061  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:51.857688  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:51.857707  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:51.872771  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:51.872788  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:51.934401  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:51.926211   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.926966   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.928583   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.929148   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.930598   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:51.926211   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.926966   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.928583   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.929148   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.930598   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:51.934410  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:51.934420  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
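
Between retries, minikube assembles the same log bundle each time, using the commands visible above: the last 400 journal lines for the kubelet and CRI-O units, kernel messages at warning level and above, and a container listing that falls back from crictl to docker when crictl is absent. Run together, these reproduce the bundle by hand (commands copied verbatim from the log; the 400-line window is the default this run used):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
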
	I1216 04:38:54.502288  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:54.513093  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:54.513158  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:54.540102  481598 cri.go:89] found id: ""
	I1216 04:38:54.540116  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.540124  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:54.540129  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:54.540187  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:54.565581  481598 cri.go:89] found id: ""
	I1216 04:38:54.565597  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.565605  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:54.565609  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:54.565673  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:54.594141  481598 cri.go:89] found id: ""
	I1216 04:38:54.594155  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.594163  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:54.594167  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:54.594229  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:54.620437  481598 cri.go:89] found id: ""
	I1216 04:38:54.620451  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.620459  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:54.620464  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:54.620521  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:54.651777  481598 cri.go:89] found id: ""
	I1216 04:38:54.651792  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.651800  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:54.651805  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:54.651862  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:54.677522  481598 cri.go:89] found id: ""
	I1216 04:38:54.677536  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.677544  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:54.677549  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:54.677608  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:54.702758  481598 cri.go:89] found id: ""
	I1216 04:38:54.702774  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.702782  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:54.702789  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:54.702800  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:54.731468  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:54.731485  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:54.801713  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:54.801732  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:54.816784  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:54.816800  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:54.890418  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:54.882935   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.883563   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.884583   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.885055   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.886522   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:54.882935   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.883563   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.884583   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.885055   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.886522   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:54.890428  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:54.890439  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:57.462843  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:57.473005  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:57.473096  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:57.498656  481598 cri.go:89] found id: ""
	I1216 04:38:57.498670  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.498676  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:57.498682  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:57.498740  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:57.524589  481598 cri.go:89] found id: ""
	I1216 04:38:57.524604  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.524611  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:57.524616  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:57.524683  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:57.549819  481598 cri.go:89] found id: ""
	I1216 04:38:57.549833  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.549844  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:57.549849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:57.549906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:57.580220  481598 cri.go:89] found id: ""
	I1216 04:38:57.580234  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.580241  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:57.580246  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:57.580303  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:57.605587  481598 cri.go:89] found id: ""
	I1216 04:38:57.605600  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.605607  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:57.605612  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:57.605668  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:57.630691  481598 cri.go:89] found id: ""
	I1216 04:38:57.630706  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.630721  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:57.630726  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:57.630784  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:57.655557  481598 cri.go:89] found id: ""
	I1216 04:38:57.655571  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.655579  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:57.655588  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:57.655598  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:57.686872  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:57.686888  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:57.752402  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:57.752422  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:57.767423  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:57.767439  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:57.831611  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:57.823549   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.824364   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826016   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826308   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.827809   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:57.823549   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.824364   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826016   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826308   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.827809   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:57.831621  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:57.831631  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:00.403298  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:00.416780  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:00.416848  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:00.445602  481598 cri.go:89] found id: ""
	I1216 04:39:00.445618  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.445626  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:00.445632  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:00.445698  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:00.480454  481598 cri.go:89] found id: ""
	I1216 04:39:00.480470  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.480478  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:00.480483  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:00.480548  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:00.509654  481598 cri.go:89] found id: ""
	I1216 04:39:00.509669  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.509677  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:00.509682  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:00.509746  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:00.539666  481598 cri.go:89] found id: ""
	I1216 04:39:00.539681  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.539688  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:00.539694  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:00.539755  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:00.567301  481598 cri.go:89] found id: ""
	I1216 04:39:00.567316  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.567323  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:00.567328  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:00.567388  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:00.593431  481598 cri.go:89] found id: ""
	I1216 04:39:00.593446  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.593453  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:00.593458  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:00.593526  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:00.618713  481598 cri.go:89] found id: ""
	I1216 04:39:00.618728  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.618736  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:00.618743  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:00.618754  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:00.687858  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:00.678533   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.679159   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.681526   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.682358   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.683768   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:00.678533   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.679159   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.681526   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.682358   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.683768   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:00.687869  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:00.687880  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:00.757046  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:00.757071  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:00.784949  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:00.784966  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:00.850312  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:00.850331  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:03.365582  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:03.376104  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:03.376164  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:03.402517  481598 cri.go:89] found id: ""
	I1216 04:39:03.402532  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.402539  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:03.402544  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:03.402605  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:03.428281  481598 cri.go:89] found id: ""
	I1216 04:39:03.428295  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.428302  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:03.428308  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:03.428365  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:03.456250  481598 cri.go:89] found id: ""
	I1216 04:39:03.456267  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.456274  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:03.456280  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:03.456353  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:03.482051  481598 cri.go:89] found id: ""
	I1216 04:39:03.482064  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.482071  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:03.482077  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:03.482137  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:03.511578  481598 cri.go:89] found id: ""
	I1216 04:39:03.511594  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.511601  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:03.511606  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:03.511664  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:03.540839  481598 cri.go:89] found id: ""
	I1216 04:39:03.540853  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.540860  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:03.540866  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:03.540921  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:03.567087  481598 cri.go:89] found id: ""
	I1216 04:39:03.567103  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.567111  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:03.567119  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:03.567131  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:03.633316  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:03.633338  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:03.648697  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:03.648714  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:03.714118  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:03.704846   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.705914   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.707686   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.708281   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.710068   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:03.704846   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.705914   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.707686   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.708281   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.710068   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:03.714128  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:03.714140  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:03.784197  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:03.784219  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:06.317384  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:06.328685  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:06.328743  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:06.355872  481598 cri.go:89] found id: ""
	I1216 04:39:06.355887  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.355893  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:06.355907  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:06.355964  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:06.386605  481598 cri.go:89] found id: ""
	I1216 04:39:06.386619  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.386626  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:06.386631  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:06.386696  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:06.412102  481598 cri.go:89] found id: ""
	I1216 04:39:06.412117  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.412132  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:06.412137  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:06.412209  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:06.437654  481598 cri.go:89] found id: ""
	I1216 04:39:06.437669  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.437676  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:06.437681  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:06.437752  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:06.466130  481598 cri.go:89] found id: ""
	I1216 04:39:06.466145  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.466151  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:06.466156  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:06.466219  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:06.491149  481598 cri.go:89] found id: ""
	I1216 04:39:06.491163  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.491170  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:06.491176  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:06.491236  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:06.517113  481598 cri.go:89] found id: ""
	I1216 04:39:06.517127  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.517134  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:06.517141  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:06.517165  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:06.532219  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:06.532236  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:06.610459  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:06.601795   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603005   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603804   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605522   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605849   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:06.601795   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603005   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603804   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605522   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605849   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:06.610469  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:06.610480  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:06.678489  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:06.678509  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:06.713694  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:06.713710  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:09.281978  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:09.291972  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:09.292040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:09.318986  481598 cri.go:89] found id: ""
	I1216 04:39:09.319002  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.319009  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:09.319014  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:09.319080  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:09.355810  481598 cri.go:89] found id: ""
	I1216 04:39:09.355823  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.355848  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:09.355853  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:09.355917  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:09.386910  481598 cri.go:89] found id: ""
	I1216 04:39:09.386939  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.386946  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:09.386951  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:09.387019  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:09.415820  481598 cri.go:89] found id: ""
	I1216 04:39:09.415834  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.415841  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:09.415846  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:09.415902  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:09.441866  481598 cri.go:89] found id: ""
	I1216 04:39:09.441881  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.441888  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:09.441892  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:09.441956  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:09.467703  481598 cri.go:89] found id: ""
	I1216 04:39:09.467718  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.467724  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:09.467730  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:09.467790  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:09.494307  481598 cri.go:89] found id: ""
	I1216 04:39:09.494322  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.494329  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:09.494336  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:09.494346  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:09.521531  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:09.521549  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:09.587441  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:09.587464  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:09.602275  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:09.602291  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:09.664727  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:09.657029   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.657494   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659008   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659326   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.660782   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:09.657029   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.657494   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659008   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659326   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.660782   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:09.664737  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:09.664748  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
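
The cycle above is minikube's apiserver wait loop: probe for a kube-apiserver process, list CRI containers for each control-plane component, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal sketch of the same probe, runnable by hand on the node (a hand-written reproduction, not minikube code; assumes shell access to the node, e.g. via minikube ssh, and that crictl and curl are on the PATH):

    # Is any apiserver process running at all? (same pattern the log uses)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

    # Ask the CRI runtime for apiserver containers, running or exited
    sudo crictl ps -a --quiet --name=kube-apiserver

    # Probe the port kubectl is being refused on
    curl -sk https://localhost:8441/healthz || echo "nothing listening on 8441"

Every iteration below returns an empty container list, so the loop keeps retrying at roughly three-second intervals.
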
	I1216 04:39:12.233947  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:12.245865  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:12.245923  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:12.270410  481598 cri.go:89] found id: ""
	I1216 04:39:12.270425  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.270431  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:12.270437  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:12.270513  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:12.295309  481598 cri.go:89] found id: ""
	I1216 04:39:12.295323  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.295330  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:12.295334  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:12.295391  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:12.326327  481598 cri.go:89] found id: ""
	I1216 04:39:12.326342  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.326349  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:12.326354  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:12.326415  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:12.358181  481598 cri.go:89] found id: ""
	I1216 04:39:12.358196  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.358203  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:12.358208  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:12.358309  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:12.390281  481598 cri.go:89] found id: ""
	I1216 04:39:12.390296  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.390303  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:12.390308  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:12.390365  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:12.419429  481598 cri.go:89] found id: ""
	I1216 04:39:12.419444  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.419451  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:12.419456  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:12.419512  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:12.445137  481598 cri.go:89] found id: ""
	I1216 04:39:12.445151  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.445159  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:12.445167  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:12.445177  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:12.510786  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:12.510805  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:12.525785  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:12.525801  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:12.590602  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:12.581842   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.582992   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.584571   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.585097   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.586642   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:12.581842   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.582992   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.584571   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.585097   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.586642   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:12.590616  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:12.590627  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:12.664304  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:12.664331  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:15.192618  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:15.202786  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:15.202855  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:15.227787  481598 cri.go:89] found id: ""
	I1216 04:39:15.227801  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.227808  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:15.227813  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:15.227875  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:15.254490  481598 cri.go:89] found id: ""
	I1216 04:39:15.254505  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.254512  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:15.254517  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:15.254578  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:15.280037  481598 cri.go:89] found id: ""
	I1216 04:39:15.280052  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.280060  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:15.280064  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:15.280124  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:15.306278  481598 cri.go:89] found id: ""
	I1216 04:39:15.306295  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.306303  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:15.306308  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:15.306368  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:15.338132  481598 cri.go:89] found id: ""
	I1216 04:39:15.338146  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.338152  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:15.338157  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:15.338215  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:15.365557  481598 cri.go:89] found id: ""
	I1216 04:39:15.365571  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.365578  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:15.365583  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:15.365640  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:15.394440  481598 cri.go:89] found id: ""
	I1216 04:39:15.394454  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.394461  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:15.394469  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:15.394478  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:15.460219  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:15.460240  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:15.475344  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:15.475362  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:15.543524  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:15.535805   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.536549   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538069   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538584   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.539605   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:15.535805   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.536549   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538069   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538584   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.539605   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:15.543542  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:15.543552  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:15.611736  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:15.611757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:18.147208  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:18.157570  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:18.157629  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:18.182325  481598 cri.go:89] found id: ""
	I1216 04:39:18.182339  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.182346  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:18.182351  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:18.182409  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:18.211344  481598 cri.go:89] found id: ""
	I1216 04:39:18.211358  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.211365  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:18.211370  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:18.211430  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:18.236501  481598 cri.go:89] found id: ""
	I1216 04:39:18.236518  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.236525  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:18.236533  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:18.236600  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:18.261000  481598 cri.go:89] found id: ""
	I1216 04:39:18.261013  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.261020  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:18.261025  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:18.261112  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:18.286887  481598 cri.go:89] found id: ""
	I1216 04:39:18.286901  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.286908  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:18.286913  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:18.286970  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:18.311492  481598 cri.go:89] found id: ""
	I1216 04:39:18.311506  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.311514  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:18.311519  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:18.311577  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:18.350624  481598 cri.go:89] found id: ""
	I1216 04:39:18.350638  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.350645  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:18.350652  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:18.350663  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:18.424437  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:18.424461  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:18.439409  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:18.439425  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:18.503408  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:18.495202   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.495809   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.497576   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.498108   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.499707   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:18.495202   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.495809   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.497576   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.498108   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.499707   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:18.503426  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:18.503439  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:18.572236  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:18.572256  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:21.099923  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:21.109895  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:21.109959  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:21.135095  481598 cri.go:89] found id: ""
	I1216 04:39:21.135110  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.135117  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:21.135122  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:21.135188  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:21.159978  481598 cri.go:89] found id: ""
	I1216 04:39:21.159991  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.159998  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:21.160002  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:21.160060  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:21.184861  481598 cri.go:89] found id: ""
	I1216 04:39:21.184875  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.184882  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:21.184887  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:21.184943  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:21.215362  481598 cri.go:89] found id: ""
	I1216 04:39:21.215376  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.215383  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:21.215388  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:21.215451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:21.241352  481598 cri.go:89] found id: ""
	I1216 04:39:21.241366  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.241373  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:21.241378  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:21.241435  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:21.270124  481598 cri.go:89] found id: ""
	I1216 04:39:21.270139  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.270146  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:21.270151  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:21.270210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:21.294836  481598 cri.go:89] found id: ""
	I1216 04:39:21.294850  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.294857  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:21.294865  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:21.294876  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:21.340249  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:21.340265  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:21.415950  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:21.415975  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:21.431603  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:21.431619  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:21.496240  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:21.487807   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.488585   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490277   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490834   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.492427   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:21.487807   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.488585   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490277   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490834   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.492427   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:21.496250  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:21.496260  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
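
The describe-nodes failure in each cycle reduces to the same connection refusal. To reproduce that check in isolation, using the in-node binary and kubeconfig paths shown in the log (a sketch under those path assumptions):

    # Same kubectl binary and kubeconfig the log gatherer uses
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get --raw /readyz

Until something binds localhost:8441, this fails with the same "connection refused" seen in every ** stderr ** block above.
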
	I1216 04:39:24.064476  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:24.075218  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:24.075282  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:24.100790  481598 cri.go:89] found id: ""
	I1216 04:39:24.100804  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.100810  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:24.100815  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:24.100870  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:24.127285  481598 cri.go:89] found id: ""
	I1216 04:39:24.127301  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.127308  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:24.127312  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:24.127371  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:24.156427  481598 cri.go:89] found id: ""
	I1216 04:39:24.156440  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.156447  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:24.156452  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:24.156513  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:24.182130  481598 cri.go:89] found id: ""
	I1216 04:39:24.182146  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.182154  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:24.182159  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:24.182216  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:24.207363  481598 cri.go:89] found id: ""
	I1216 04:39:24.207378  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.207385  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:24.207390  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:24.207451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:24.235986  481598 cri.go:89] found id: ""
	I1216 04:39:24.236001  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.236017  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:24.236022  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:24.236077  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:24.260561  481598 cri.go:89] found id: ""
	I1216 04:39:24.260582  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.260589  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:24.260597  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:24.260608  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:24.328717  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:24.328738  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:24.362340  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:24.362357  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:24.435463  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:24.435483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:24.452196  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:24.452212  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:24.517484  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:24.509289   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.509913   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511537   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511992   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.513587   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:24.509289   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.509913   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511537   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511992   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.513587   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:27.018375  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:27.028921  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:27.028982  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:27.058968  481598 cri.go:89] found id: ""
	I1216 04:39:27.058984  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.058991  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:27.058996  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:27.059058  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:27.086788  481598 cri.go:89] found id: ""
	I1216 04:39:27.086802  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.086808  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:27.086815  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:27.086872  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:27.111593  481598 cri.go:89] found id: ""
	I1216 04:39:27.111607  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.111629  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:27.111635  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:27.111700  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:27.135786  481598 cri.go:89] found id: ""
	I1216 04:39:27.135800  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.135816  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:27.135822  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:27.135881  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:27.175564  481598 cri.go:89] found id: ""
	I1216 04:39:27.175577  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.175593  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:27.175598  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:27.175670  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:27.201020  481598 cri.go:89] found id: ""
	I1216 04:39:27.201034  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.201041  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:27.201048  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:27.201123  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:27.226608  481598 cri.go:89] found id: ""
	I1216 04:39:27.226622  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.226629  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:27.226637  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:27.226648  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:27.292121  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:27.292140  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:27.307824  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:27.307840  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:27.382707  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:27.371394   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.372197   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374043   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374339   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.375852   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:27.371394   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.372197   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374043   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374339   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.375852   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:27.382717  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:27.382728  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:27.450745  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:27.450764  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:29.981824  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:29.991752  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:29.991812  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:30.027720  481598 cri.go:89] found id: ""
	I1216 04:39:30.027737  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.027744  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:30.027749  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:30.027824  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:30.064834  481598 cri.go:89] found id: ""
	I1216 04:39:30.064862  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.064869  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:30.064875  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:30.064942  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:30.092327  481598 cri.go:89] found id: ""
	I1216 04:39:30.092341  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.092349  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:30.092354  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:30.092415  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:30.119568  481598 cri.go:89] found id: ""
	I1216 04:39:30.119583  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.119590  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:30.119595  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:30.119654  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:30.145948  481598 cri.go:89] found id: ""
	I1216 04:39:30.145962  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.145970  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:30.145974  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:30.146037  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:30.174055  481598 cri.go:89] found id: ""
	I1216 04:39:30.174069  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.174077  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:30.174082  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:30.174148  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:30.200676  481598 cri.go:89] found id: ""
	I1216 04:39:30.200704  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.200711  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:30.200719  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:30.200729  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:30.273177  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:30.273199  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:30.307730  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:30.307749  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:30.380128  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:30.380149  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:30.398650  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:30.398668  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:30.464666  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:30.456212   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.456700   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458422   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458756   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.460283   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:30.456212   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.456700   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458422   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458756   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.460283   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:32.965244  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:32.975770  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:32.975829  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:33.008069  481598 cri.go:89] found id: ""
	I1216 04:39:33.008086  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.008094  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:33.008099  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:33.008180  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:33.035228  481598 cri.go:89] found id: ""
	I1216 04:39:33.035242  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.035249  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:33.035254  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:33.035319  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:33.062504  481598 cri.go:89] found id: ""
	I1216 04:39:33.062518  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.062525  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:33.062530  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:33.062588  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:33.088441  481598 cri.go:89] found id: ""
	I1216 04:39:33.088455  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.088462  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:33.088467  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:33.088529  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:33.119260  481598 cri.go:89] found id: ""
	I1216 04:39:33.119274  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.119281  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:33.119286  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:33.119346  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:33.150552  481598 cri.go:89] found id: ""
	I1216 04:39:33.150567  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.150575  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:33.150580  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:33.150644  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:33.180001  481598 cri.go:89] found id: ""
	I1216 04:39:33.180016  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.180023  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:33.180030  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:33.180040  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:33.248727  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:33.248752  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:33.277683  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:33.277700  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:33.350702  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:33.350721  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:33.369208  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:33.369248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:33.439765  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:33.431154   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.432026   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.433573   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.434049   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.435592   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:33.431154   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.432026   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.433573   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.434049   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.435592   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
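
When the container lists stay empty like this, the kubelet journal that each cycle collects is where the root cause usually surfaces (static-pod manifest errors, image pull failures, and the like). A hedged triage sketch over the same 400-line journal window the gatherer reads (the grep pattern is illustrative, not from the report):

    # Narrow the kubelet journal to apiserver-related failures
    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -iE 'kube-apiserver|static pod|error|fail' | tail -n 40
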
	I1216 04:39:35.940031  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:35.950049  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:35.950107  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:35.975196  481598 cri.go:89] found id: ""
	I1216 04:39:35.975209  481598 logs.go:282] 0 containers: []
	W1216 04:39:35.975216  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:35.975221  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:35.975277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:36.001797  481598 cri.go:89] found id: ""
	I1216 04:39:36.001812  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.001820  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:36.001826  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:36.001890  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:36.036431  481598 cri.go:89] found id: ""
	I1216 04:39:36.036446  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.036454  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:36.036459  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:36.036525  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:36.063963  481598 cri.go:89] found id: ""
	I1216 04:39:36.063978  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.063985  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:36.063990  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:36.064048  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:36.090639  481598 cri.go:89] found id: ""
	I1216 04:39:36.090653  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.090660  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:36.090665  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:36.090724  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:36.116793  481598 cri.go:89] found id: ""
	I1216 04:39:36.116807  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.116816  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:36.116821  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:36.116880  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:36.141959  481598 cri.go:89] found id: ""
	I1216 04:39:36.141972  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.141979  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:36.141986  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:36.141996  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:36.208976  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:36.208996  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:36.239530  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:36.239546  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:36.305220  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:36.305245  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:36.322139  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:36.322169  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:36.399936  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:36.391294   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.391711   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.393476   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.394135   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.395762   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:36.391294   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.391711   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.393476   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.394135   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.395762   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:38.900194  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:38.910569  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:38.910632  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:38.936840  481598 cri.go:89] found id: ""
	I1216 04:39:38.936854  481598 logs.go:282] 0 containers: []
	W1216 04:39:38.936861  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:38.936867  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:38.936926  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:38.969994  481598 cri.go:89] found id: ""
	I1216 04:39:38.970008  481598 logs.go:282] 0 containers: []
	W1216 04:39:38.970016  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:38.970021  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:38.970092  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:39.000246  481598 cri.go:89] found id: ""
	I1216 04:39:39.000260  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.000267  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:39.000272  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:39.000328  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:39.028053  481598 cri.go:89] found id: ""
	I1216 04:39:39.028068  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.028075  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:39.028080  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:39.028139  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:39.053044  481598 cri.go:89] found id: ""
	I1216 04:39:39.053058  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.053100  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:39.053107  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:39.053165  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:39.078212  481598 cri.go:89] found id: ""
	I1216 04:39:39.078226  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.078234  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:39.078239  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:39.078296  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:39.103968  481598 cri.go:89] found id: ""
	I1216 04:39:39.103982  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.103994  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:39.104001  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:39.104011  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:39.171261  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:39.171283  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:39.203918  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:39.203937  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:39.269162  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:39.269183  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:39.283640  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:39.283658  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:39.357490  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:39.349083   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.349811   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351336   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351851   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.353466   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:39.349083   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.349811   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351336   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351851   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.353466   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:41.857783  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:41.868156  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:41.868218  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:41.896097  481598 cri.go:89] found id: ""
	I1216 04:39:41.896111  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.896118  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:41.896123  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:41.896183  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:41.923730  481598 cri.go:89] found id: ""
	I1216 04:39:41.923745  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.923752  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:41.923758  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:41.923814  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:41.948996  481598 cri.go:89] found id: ""
	I1216 04:39:41.949010  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.949017  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:41.949022  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:41.949098  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:41.973820  481598 cri.go:89] found id: ""
	I1216 04:39:41.973834  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.973841  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:41.973845  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:41.973901  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:41.999809  481598 cri.go:89] found id: ""
	I1216 04:39:41.999832  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.999839  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:41.999845  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:41.999910  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:42.032190  481598 cri.go:89] found id: ""
	I1216 04:39:42.032216  481598 logs.go:282] 0 containers: []
	W1216 04:39:42.032224  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:42.032229  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:42.032301  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:42.059655  481598 cri.go:89] found id: ""
	I1216 04:39:42.059679  481598 logs.go:282] 0 containers: []
	W1216 04:39:42.059687  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:42.059694  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:42.059705  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:42.127853  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:42.127875  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:42.146370  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:42.146393  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:42.223415  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:42.212968   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.213792   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.215670   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.216278   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.218024   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:42.212968   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.213792   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.215670   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.216278   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.218024   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:42.223444  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:42.223457  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:42.304338  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:42.304368  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
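
The timestamps show the same gather-and-retry cycle repeating roughly every three seconds. A rough bash equivalent of that wait loop, as a hypothetical sketch only (minikube's actual implementation is the Go code cited in the log, logs.go/cri.go); the 300-second deadline is an illustrative value, not the test's real timeout:

  # Poll for a running kube-apiserver about every 3 seconds until a deadline.
  deadline=$((SECONDS + 300))
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    if [ "$SECONDS" -ge "$deadline" ]; then
      echo "timed out waiting for kube-apiserver" >&2
      exit 1
    fi
    # Mirrors the per-component listing above; also empty while nothing runs.
    sudo crictl ps -a --quiet --name=kube-apiserver
    sleep 3
  done
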
	I1216 04:39:44.847911  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:44.858741  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:44.858820  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:44.884095  481598 cri.go:89] found id: ""
	I1216 04:39:44.884110  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.884118  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:44.884122  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:44.884181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:44.911877  481598 cri.go:89] found id: ""
	I1216 04:39:44.911891  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.911898  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:44.911902  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:44.911960  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:44.938117  481598 cri.go:89] found id: ""
	I1216 04:39:44.938132  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.938139  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:44.938144  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:44.938204  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:44.972779  481598 cri.go:89] found id: ""
	I1216 04:39:44.972793  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.972800  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:44.972805  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:44.972862  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:45.000033  481598 cri.go:89] found id: ""
	I1216 04:39:45.000047  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.000054  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:45.000060  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:45.000121  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:45.072214  481598 cri.go:89] found id: ""
	I1216 04:39:45.072234  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.072244  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:45.072250  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:45.072325  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:45.112612  481598 cri.go:89] found id: ""
	I1216 04:39:45.112632  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.112641  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:45.112653  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:45.112668  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:45.193381  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:45.193407  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:45.244205  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:45.244225  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:45.324983  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:45.325004  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:45.340857  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:45.340880  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:45.423270  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:45.414685   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.415307   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.416945   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.417545   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.419306   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:45.414685   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.415307   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.416945   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.417545   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.419306   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:47.923526  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:47.933779  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:47.933853  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:47.960777  481598 cri.go:89] found id: ""
	I1216 04:39:47.960793  481598 logs.go:282] 0 containers: []
	W1216 04:39:47.960800  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:47.960804  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:47.960863  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:47.990010  481598 cri.go:89] found id: ""
	I1216 04:39:47.990024  481598 logs.go:282] 0 containers: []
	W1216 04:39:47.990031  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:47.990036  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:47.990094  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:48.021881  481598 cri.go:89] found id: ""
	I1216 04:39:48.021897  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.021908  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:48.021914  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:48.021978  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:48.048841  481598 cri.go:89] found id: ""
	I1216 04:39:48.048860  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.048867  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:48.048872  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:48.048947  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:48.074988  481598 cri.go:89] found id: ""
	I1216 04:39:48.075002  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.075010  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:48.075015  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:48.075073  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:48.101288  481598 cri.go:89] found id: ""
	I1216 04:39:48.101303  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.101320  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:48.101325  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:48.101383  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:48.126469  481598 cri.go:89] found id: ""
	I1216 04:39:48.126483  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.126489  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:48.126497  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:48.126508  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:48.160206  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:48.160222  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:48.226864  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:48.226883  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:48.241861  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:48.241879  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:48.311183  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:48.302762   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.303348   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.304889   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.305401   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.306868   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:48.302762   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.303348   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.304889   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.305401   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.306868   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:48.311197  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:48.311208  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:50.890106  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:50.900561  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:50.900623  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:50.925477  481598 cri.go:89] found id: ""
	I1216 04:39:50.925491  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.925498  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:50.925503  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:50.925573  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:50.950590  481598 cri.go:89] found id: ""
	I1216 04:39:50.950604  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.950611  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:50.950615  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:50.950670  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:50.975563  481598 cri.go:89] found id: ""
	I1216 04:39:50.975577  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.975584  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:50.975588  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:50.975649  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:51.001446  481598 cri.go:89] found id: ""
	I1216 04:39:51.001460  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.001468  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:51.001473  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:51.001546  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:51.036808  481598 cri.go:89] found id: ""
	I1216 04:39:51.036822  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.036830  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:51.036834  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:51.036893  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:51.063122  481598 cri.go:89] found id: ""
	I1216 04:39:51.063136  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.063143  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:51.063148  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:51.063204  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:51.091909  481598 cri.go:89] found id: ""
	I1216 04:39:51.091924  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.091931  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:51.091938  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:51.091949  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:51.157330  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:51.157357  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:51.172521  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:51.172537  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:51.237104  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:51.228688   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.229354   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.230964   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.231596   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.233259   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:51.228688   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.229354   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.230964   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.231596   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.233259   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:51.237115  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:51.237126  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:51.310463  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:51.310484  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:53.856519  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:53.866849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:53.866907  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:53.892183  481598 cri.go:89] found id: ""
	I1216 04:39:53.892197  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.892204  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:53.892210  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:53.892269  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:53.917961  481598 cri.go:89] found id: ""
	I1216 04:39:53.917975  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.917983  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:53.917987  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:53.918046  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:53.943214  481598 cri.go:89] found id: ""
	I1216 04:39:53.943228  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.943235  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:53.943240  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:53.943298  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:53.968696  481598 cri.go:89] found id: ""
	I1216 04:39:53.968710  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.968717  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:53.968722  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:53.968778  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:53.993878  481598 cri.go:89] found id: ""
	I1216 04:39:53.993892  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.993900  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:53.993905  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:53.993961  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:54.021892  481598 cri.go:89] found id: ""
	I1216 04:39:54.021911  481598 logs.go:282] 0 containers: []
	W1216 04:39:54.021918  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:54.021924  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:54.021989  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:54.048339  481598 cri.go:89] found id: ""
	I1216 04:39:54.048353  481598 logs.go:282] 0 containers: []
	W1216 04:39:54.048360  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:54.048368  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:54.048379  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:54.115518  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:54.107249   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.107772   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109446   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109968   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.111592   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:54.107249   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.107772   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109446   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109968   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.111592   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:54.115529  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:54.115540  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:54.184110  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:54.184130  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:54.212611  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:54.212627  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:54.280294  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:54.280314  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
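
Every crictl query in these cycles returns an empty ID list, so the control-plane containers were never created rather than created and crashed. A hedged next-step sketch for narrowing that down from inside the node, assuming the standard kubeadm static-pod manifest directory (an assumption; the path is not shown in this log):

  # Does kubelet even have static-pod manifests to start?
  ls -l /etc/kubernetes/manifests
  # Look for kubelet-side reasons the pods never came up.
  sudo journalctl -u kubelet -n 400 | grep -iE 'apiserver|manifest|failed'
  # All containers, any state, with no name filter.
  sudo crictl ps -a
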
	I1216 04:39:56.795621  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:56.805834  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:56.805904  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:56.831835  481598 cri.go:89] found id: ""
	I1216 04:39:56.831850  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.831857  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:56.831862  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:56.831920  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:56.857986  481598 cri.go:89] found id: ""
	I1216 04:39:56.858000  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.858007  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:56.858012  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:56.858086  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:56.884049  481598 cri.go:89] found id: ""
	I1216 04:39:56.884062  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.884069  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:56.884074  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:56.884129  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:56.909467  481598 cri.go:89] found id: ""
	I1216 04:39:56.909481  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.909488  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:56.909493  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:56.909553  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:56.935361  481598 cri.go:89] found id: ""
	I1216 04:39:56.935375  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.935382  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:56.935387  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:56.935444  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:56.963724  481598 cri.go:89] found id: ""
	I1216 04:39:56.963738  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.963745  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:56.963750  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:56.963807  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:56.988482  481598 cri.go:89] found id: ""
	I1216 04:39:56.988495  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.988502  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:56.988510  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:56.988520  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:57.057566  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:57.057587  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:57.073142  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:57.073160  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:57.138961  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:57.130646   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.131071   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.132726   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.133151   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.134926   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:57.130646   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.131071   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.132726   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.133151   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.134926   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:57.138972  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:57.138983  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:57.206475  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:57.206497  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:59.739022  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:59.749638  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:59.749700  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:59.776094  481598 cri.go:89] found id: ""
	I1216 04:39:59.776109  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.776115  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:59.776120  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:59.776180  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:59.802606  481598 cri.go:89] found id: ""
	I1216 04:39:59.802621  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.802628  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:59.802634  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:59.802697  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:59.829710  481598 cri.go:89] found id: ""
	I1216 04:39:59.829724  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.829731  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:59.829736  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:59.829808  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:59.859658  481598 cri.go:89] found id: ""
	I1216 04:39:59.859673  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.859680  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:59.859685  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:59.859742  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:59.884817  481598 cri.go:89] found id: ""
	I1216 04:39:59.884831  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.884838  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:59.884843  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:59.884906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:59.911195  481598 cri.go:89] found id: ""
	I1216 04:39:59.911210  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.911217  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:59.911223  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:59.911283  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:59.936870  481598 cri.go:89] found id: ""
	I1216 04:39:59.936885  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.936891  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:59.936899  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:59.936909  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:00.003032  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:00.003054  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:00.086753  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:00.086772  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:00.242338  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:00.228915   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.229766   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.232549   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.234186   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.236627   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:40:00.228915   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.229766   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.232549   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.234186   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.236627   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
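The describe-nodes step fails for the same underlying reason as the empty container listings: nothing is serving the apiserver port for this profile (8441), so every client call is refused at the TCP layer. A quick hand check from inside the node, as a sketch only (the ss invocation is an assumption; the kubectl line is the exact command from the log):

    # is anything listening on the apiserver port?
    sudo ss -ltn | grep 8441
    # rerun the exact probe minikube used
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig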
	I1216 04:40:00.242351  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:00.242395  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:00.380976  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:00.381000  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
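Each polling round above is the same diagnostic sweep: minikube queries crictl for every expected control-plane container, finds no IDs, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The sweep is reproducible by hand with the commands already shown in the log; a minimal sketch:

    # any control-plane containers known to CRI-O, in any state?
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # the log sources collected when nothing is found
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a

Empty output from every crictl query is consistent with the kubelet never having created the static pods, which the kubeadm failure further down confirms.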
	I1216 04:40:02.964729  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:02.974990  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:40:02.975051  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:40:03.001443  481598 cri.go:89] found id: ""
	I1216 04:40:03.001458  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.001466  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:40:03.001471  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:40:03.001538  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:40:03.030227  481598 cri.go:89] found id: ""
	I1216 04:40:03.030241  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.030249  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:40:03.030254  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:40:03.030315  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:40:03.056406  481598 cri.go:89] found id: ""
	I1216 04:40:03.056421  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.056429  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:40:03.056439  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:40:03.056500  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:40:03.084430  481598 cri.go:89] found id: ""
	I1216 04:40:03.084452  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.084460  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:40:03.084465  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:40:03.084527  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:40:03.112058  481598 cri.go:89] found id: ""
	I1216 04:40:03.112072  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.112079  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:40:03.112084  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:40:03.112150  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:40:03.139147  481598 cri.go:89] found id: ""
	I1216 04:40:03.139161  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.139168  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:40:03.139173  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:40:03.139231  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:40:03.170943  481598 cri.go:89] found id: ""
	I1216 04:40:03.170958  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.170965  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:40:03.170973  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:40:03.170984  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:03.237388  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:03.237409  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:03.252191  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:03.252213  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:03.315123  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:03.306446   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.307653   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.308545   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.309495   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.310189   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:40:03.306446   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.307653   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.308545   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.309495   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.310189   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:40:03.315132  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:03.315143  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:03.388848  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:03.388869  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:40:05.923315  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:05.934216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:40:05.934292  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:40:05.964778  481598 cri.go:89] found id: ""
	I1216 04:40:05.964791  481598 logs.go:282] 0 containers: []
	W1216 04:40:05.964798  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:40:05.964813  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:40:05.964895  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:40:05.991403  481598 cri.go:89] found id: ""
	I1216 04:40:05.991417  481598 logs.go:282] 0 containers: []
	W1216 04:40:05.991424  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:40:05.991429  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:40:05.991486  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:40:06.019838  481598 cri.go:89] found id: ""
	I1216 04:40:06.019853  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.019860  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:40:06.019865  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:40:06.019927  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:40:06.046554  481598 cri.go:89] found id: ""
	I1216 04:40:06.046569  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.046580  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:40:06.046585  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:40:06.046649  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:40:06.071958  481598 cri.go:89] found id: ""
	I1216 04:40:06.071973  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.071980  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:40:06.071985  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:40:06.072040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:40:06.099079  481598 cri.go:89] found id: ""
	I1216 04:40:06.099094  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.099101  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:40:06.099106  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:40:06.099170  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:40:06.126168  481598 cri.go:89] found id: ""
	I1216 04:40:06.126188  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.126195  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:40:06.126202  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:40:06.126213  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:06.192591  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:06.192611  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:06.207708  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:06.207729  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:06.274064  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:06.264712   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.265524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.267524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.268552   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.269537   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:40:06.264712   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.265524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.267524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.268552   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.269537   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:40:06.274074  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:06.274086  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:06.343044  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:06.343066  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:40:08.873218  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:08.883654  481598 kubeadm.go:602] duration metric: took 4m3.325303057s to restartPrimaryControlPlane
	W1216 04:40:08.883714  481598 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 04:40:08.883788  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
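Between sweeps minikube polls 'sudo pgrep -xnf kube-apiserver.*minikube.*' for a live apiserver process; once the restart budget is exhausted (4m3.3s above) it falls back to a full reset. Running kubeadm reset with --force against the CRI-O socket tears down the static-pod manifests, the local etcd data, and the /etc/kubernetes config files, so the follow-up kubeadm init starts from a clean slate (the missing admin.conf/kubelet.conf files just below are the direct result). The equivalent manual invocation, copied from the log:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force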
	I1216 04:40:09.294329  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:40:09.307484  481598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:40:09.315713  481598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:40:09.315769  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:40:09.323612  481598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:40:09.323622  481598 kubeadm.go:158] found existing configuration files:
	
	I1216 04:40:09.323675  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:40:09.331783  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:40:09.331838  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:40:09.339284  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:40:09.346837  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:40:09.346891  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:40:09.354493  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:40:09.362269  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:40:09.362328  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:40:09.369970  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:40:09.378044  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:40:09.378103  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
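The cleanup pass above is mechanical: for each kubeconfig-style file under /etc/kubernetes, grep for the expected control-plane endpoint and delete the file when the endpoint is absent; rm -f makes the delete a no-op for files the reset already removed, which is why every grep here exits with status 2. A compact sketch of the same pass:

    ENDPOINT="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done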
	I1216 04:40:09.385765  481598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:40:09.424060  481598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:40:09.424358  481598 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:40:09.495076  481598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:40:09.495141  481598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:40:09.495181  481598 kubeadm.go:319] OS: Linux
	I1216 04:40:09.495224  481598 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:40:09.495271  481598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:40:09.495318  481598 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:40:09.495365  481598 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:40:09.495412  481598 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:40:09.495459  481598 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:40:09.495502  481598 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:40:09.495550  481598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:40:09.495596  481598 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:40:09.563458  481598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:40:09.563582  481598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:40:09.563682  481598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:40:09.571744  481598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:40:09.577424  481598 out.go:252]   - Generating certificates and keys ...
	I1216 04:40:09.577526  481598 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:40:09.577597  481598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:40:09.577679  481598 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:40:09.577744  481598 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:40:09.577819  481598 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:40:09.577878  481598 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:40:09.577951  481598 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:40:09.578022  481598 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:40:09.578105  481598 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:40:09.578188  481598 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:40:09.578235  481598 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:40:09.578291  481598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:40:09.899760  481598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:40:10.102481  481598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:40:10.266020  481598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:40:10.669469  481598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:40:11.526452  481598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:40:11.527018  481598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:40:11.530635  481598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:40:11.533764  481598 out.go:252]   - Booting up control plane ...
	I1216 04:40:11.533860  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:40:11.533937  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:40:11.534462  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:40:11.549423  481598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:40:11.549689  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:40:11.557342  481598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:40:11.557601  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:40:11.557642  481598 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:40:11.689632  481598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:40:11.689752  481598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:44:11.687962  481598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001213504s
	I1216 04:44:11.687985  481598 kubeadm.go:319] 
	I1216 04:44:11.688045  481598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:44:11.688077  481598 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:44:11.688181  481598 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:44:11.688185  481598 kubeadm.go:319] 
	I1216 04:44:11.688293  481598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:44:11.688324  481598 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:44:11.688354  481598 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:44:11.688357  481598 kubeadm.go:319] 
	I1216 04:44:11.693131  481598 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:44:11.693558  481598 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:44:11.693669  481598 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:44:11.693904  481598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:44:11.693910  481598 kubeadm.go:319] 
	I1216 04:44:11.693977  481598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 04:44:11.694089  481598 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001213504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
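Both the inline log above and the captured kubeadm output describe the same failure: kubeadm writes the manifests and kubelet configuration, starts the kubelet, then polls http://127.0.0.1:10248/healthz for up to 4m0s without ever getting a healthy reply. The kubeadm message already names the next diagnostic steps; run on the node, assuming a systemd host (which the 'systemctl enable kubelet.service' warning implies):

    systemctl status kubelet                  # running, or crash-looping?
    journalctl -xeu kubelet                   # the exit reason, if it died
    curl -sS http://127.0.0.1:10248/healthz   # the exact probe kubeadm uses

One plausible cause, flagged by kubeadm itself, is the cgroups v1 deprecation: on a cgroup v1 host a v1.35 kubelet refuses to run unless the kubelet configuration sets 'FailCgroupV1' to 'false', and a kubelet that exits on startup never serves /healthz. That is a hypothesis read off the warnings, not something this log proves.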
	I1216 04:44:11.694190  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 04:44:12.104466  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:44:12.116829  481598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:44:12.116881  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:44:12.124364  481598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:44:12.124372  481598 kubeadm.go:158] found existing configuration files:
	
	I1216 04:44:12.124420  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:44:12.131751  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:44:12.131807  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:44:12.138938  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:44:12.146429  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:44:12.146482  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:44:12.153782  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:44:12.161218  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:44:12.161270  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:44:12.168781  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:44:12.176219  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:44:12.176271  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:44:12.183435  481598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:44:12.295783  481598 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:44:12.296200  481598 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:44:12.361811  481598 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:48:14.074988  481598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 04:48:14.075012  481598 kubeadm.go:319] 
	I1216 04:48:14.075081  481598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 04:48:14.079141  481598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:48:14.079195  481598 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:48:14.079284  481598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:48:14.079338  481598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:48:14.079372  481598 kubeadm.go:319] OS: Linux
	I1216 04:48:14.079416  481598 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:48:14.079463  481598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:48:14.079508  481598 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:48:14.079555  481598 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:48:14.079602  481598 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:48:14.079664  481598 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:48:14.079709  481598 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:48:14.079755  481598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:48:14.079801  481598 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:48:14.079872  481598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:48:14.079966  481598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:48:14.080055  481598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:48:14.080117  481598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:48:14.083166  481598 out.go:252]   - Generating certificates and keys ...
	I1216 04:48:14.083255  481598 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:48:14.083327  481598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:48:14.083402  481598 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:48:14.083461  481598 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:48:14.083529  481598 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:48:14.083582  481598 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:48:14.083644  481598 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:48:14.083704  481598 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:48:14.083778  481598 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:48:14.083849  481598 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:48:14.083886  481598 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:48:14.083941  481598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:48:14.083991  481598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:48:14.084046  481598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:48:14.084103  481598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:48:14.084165  481598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:48:14.084218  481598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:48:14.084301  481598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:48:14.084366  481598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:48:14.087214  481598 out.go:252]   - Booting up control plane ...
	I1216 04:48:14.087326  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:48:14.087404  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:48:14.087497  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:48:14.087610  481598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:48:14.087707  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:48:14.087811  481598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:48:14.087895  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:48:14.087932  481598 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:48:14.088082  481598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:48:14.088189  481598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:48:14.088268  481598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00077674s
	I1216 04:48:14.088271  481598 kubeadm.go:319] 
	I1216 04:48:14.088334  481598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:48:14.088366  481598 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:48:14.088482  481598 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:48:14.088486  481598 kubeadm.go:319] 
	I1216 04:48:14.088595  481598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:48:14.088637  481598 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:48:14.088668  481598 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:48:14.088677  481598 kubeadm.go:319] 
	I1216 04:48:14.088733  481598 kubeadm.go:403] duration metric: took 12m8.569239535s to StartCluster
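After 12m8.6s StartCluster gives up: the first init attempt timed out on the healthz probe (context deadline exceeded) and the retry was refused outright (connection refused at 04:48:14 above), so one final diagnostic sweep is taken below before the error is surfaced. For local triage the same material can be pulled through the minikube CLI; a sketch with the profile name left as a placeholder:

    out/minikube-linux-arm64 -p <profile> logs --file=logs.txt
    out/minikube-linux-arm64 -p <profile> ssh -- sudo journalctl -u kubelet -n 400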
	I1216 04:48:14.088763  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:48:14.088824  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:48:14.121113  481598 cri.go:89] found id: ""
	I1216 04:48:14.121140  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.121148  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:48:14.121153  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:48:14.121210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:48:14.150916  481598 cri.go:89] found id: ""
	I1216 04:48:14.150931  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.150938  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:48:14.150943  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:48:14.151005  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:48:14.177693  481598 cri.go:89] found id: ""
	I1216 04:48:14.177709  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.177716  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:48:14.177721  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:48:14.177782  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:48:14.202900  481598 cri.go:89] found id: ""
	I1216 04:48:14.202914  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.202921  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:48:14.202926  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:48:14.202983  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:48:14.229346  481598 cri.go:89] found id: ""
	I1216 04:48:14.229360  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.229367  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:48:14.229372  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:48:14.229433  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:48:14.255869  481598 cri.go:89] found id: ""
	I1216 04:48:14.255884  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.255891  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:48:14.255896  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:48:14.255953  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:48:14.282757  481598 cri.go:89] found id: ""
	I1216 04:48:14.282772  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.282779  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:48:14.282787  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:48:14.282797  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:48:14.349482  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:48:14.349503  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:48:14.364748  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:48:14.364765  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:48:14.440728  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:48:14.431516   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.432409   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434236   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434802   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.436554   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:48:14.431516   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.432409   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434236   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434802   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.436554   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:48:14.440741  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:48:14.440751  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:48:14.515072  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:48:14.515092  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 04:48:14.544694  481598 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:48:14.544736  481598 out.go:285] * 
	W1216 04:48:14.544844  481598 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:48:14.544900  481598 out.go:285] * 
	W1216 04:48:14.547108  481598 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:48:14.553105  481598 out.go:203] 
	W1216 04:48:14.555966  481598 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:48:14.556016  481598 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:48:14.556038  481598 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:48:14.559052  481598 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714709668Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.71475743Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714823679Z" level=info msg="Create NRI interface"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714952197Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714978487Z" level=info msg="runtime interface created"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714994996Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715003956Z" level=info msg="runtime interface starting up..."
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715015205Z" level=info msg="starting plugins..."
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715027849Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715097331Z" level=info msg="No systemd watchdog enabled"
	Dec 16 04:36:03 functional-763073 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.566937768Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5b381738-c32a-40c6-affb-c4aad9d726b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.567803155Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=7302f23d-29b3-4ddc-ad63-9af170663562 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.568336568Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=470a4814-2c77-4f21-97ca-d4b2d8b367c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.56886276Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e3d63019-6956-4b8d-9795-5e45ed470016 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.569572699Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1715eb88-0ece-47e1-8cf4-08ec329b9548 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.570118822Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=17ac1632-ceef-4623-82d4-95709ece00f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.570664255Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=9e736680-8e53-4709-9714-232fbfa617ef name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.365457664Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=66aba16f-2286-4957-9589-3f6b308f0653 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366373784Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a0b09546-fe1b-440e-8076-598a1e2930d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366892723Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=976ba277-fbb2-4db1-8ee0-ce87f329b2fa name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367464412Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=15d708f7-0c1f-4e61-bde7-afc75b1dc430 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367935941Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2d28f296-8f48-4bb2-bf27-13281f9a3b27 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368429435Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=82541142-23b6-4f48-816e-5b740356cd35 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368875848Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=29b0dee6-8ec8-4ecc-822d-bf19bcc0e034 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:48:17.948392   21376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:17.948814   21376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:17.950517   21376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:17.950996   21376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:17.952477   21376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	[Dec16 04:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:48:17 up  3:30,  0 user,  load average: 0.27, 0.21, 0.46
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:48:15 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:48:16 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 16 04:48:16 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:16 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:16 functional-763073 kubelet[21251]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:16 functional-763073 kubelet[21251]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:16 functional-763073 kubelet[21251]: E1216 04:48:16.393802   21251 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:48:16 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:48:16 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:48:17 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 16 04:48:17 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:17 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:17 functional-763073 kubelet[21285]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:17 functional-763073 kubelet[21285]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:17 functional-763073 kubelet[21285]: E1216 04:48:17.069794   21285 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:48:17 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:48:17 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:48:17 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 966.
	Dec 16 04:48:17 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:17 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:48:17 functional-763073 kubelet[21357]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:17 functional-763073 kubelet[21357]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:48:17 functional-763073 kubelet[21357]: E1216 04:48:17.864175   21357 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:48:17 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:48:17 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (378.625335ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.17s)
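The three identical kubeadm transcripts above share one root cause, spelled out in the kubelet journal: kubelet v1.35.0-beta.0 validates the host's cgroup setup at startup and exits when it finds cgroup v1, so systemd restart-loops it (restart counter 964 and climbing) and the apiserver never comes up. A minimal diagnostic sketch in plain shell (the profile name is taken from these logs; the remediation mentioned is the one the kubeadm warning itself names, not necessarily what this CI job does):

	# Which cgroup hierarchy is the host on? "cgroup2fs" means v2;
	# "tmpfs" means the legacy v1 hierarchy that kubelet rejects here.
	stat -fc %T /sys/fs/cgroup/

	# Replay the validation failure from the node's journal, as the
	# kubeadm output suggests:
	minikube -p functional-763073 ssh -- sudo journalctl -xeu kubelet | grep -i cgroup

	# Per the [WARNING SystemVerification] text above, cgroup v1 can be
	# re-allowed only through the kubelet config file, i.e. a
	# KubeletConfiguration fragment containing:
	#     failCgroupV1: false
	# How that fragment would reach /var/lib/kubelet/config.yaml inside the
	# minikube node is deployment-specific and not shown in these logs.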
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-763073 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-763073 apply -f testdata/invalidsvc.yaml: exit status 1 (76.480429ms)
** stderr **
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
** /stderr **
functional_test.go:2328: kubectl --context functional-763073 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.08s)
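This failure is downstream of the same dead control plane: kubectl cannot even download the OpenAPI schema because nothing is listening on 192.168.49.2:8441. A quick way to separate "invalid manifest" from "unreachable apiserver", as a sketch using the context and address from the output above (/readyz is the standard apiserver health endpoint):

	# Both fail fast with the same "connection refused" while the
	# apiserver is down:
	kubectl --context functional-763073 get --raw /readyz
	curl -k https://192.168.49.2:8441/readyz

	# The error text offers --validate=false, but with the apiserver down
	# the apply would still fail at submission time, so skipping validation
	# only moves the error:
	kubectl --context functional-763073 apply -f testdata/invalidsvc.yaml --validate=false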
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-763073 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-763073 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-763073 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-763073 --alsologtostderr -v=1] stderr:
I1216 04:50:19.552685  498909 out.go:360] Setting OutFile to fd 1 ...
I1216 04:50:19.552829  498909 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:19.552850  498909 out.go:374] Setting ErrFile to fd 2...
I1216 04:50:19.552866  498909 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:19.553288  498909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:50:19.553635  498909 mustload.go:66] Loading cluster: functional-763073
I1216 04:50:19.554369  498909 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:19.555069  498909 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
I1216 04:50:19.573482  498909 host.go:66] Checking if "functional-763073" exists ...
I1216 04:50:19.573812  498909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 04:50:19.628480  498909 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:50:19.619243916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 04:50:19.628611  498909 api_server.go:166] Checking apiserver status ...
I1216 04:50:19.628681  498909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1216 04:50:19.628729  498909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
I1216 04:50:19.646019  498909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
W1216 04:50:19.742966  498909 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1216 04:50:19.746226  498909 out.go:179] * The control-plane node functional-763073 apiserver is not running: (state=Stopped)
I1216 04:50:19.749100  498909 out.go:179]   To start a cluster, run: "minikube start -p functional-763073"
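The dashboard subcommand never prints a URL because its preflight apiserver probe (the api_server.go:166 check above) finds no kube-apiserver process on the node. That probe can be replayed by hand; the pgrep pattern is copied verbatim from the ssh_runner line above, with quoting for the remote shell as the only addition:

	# Exit status 1 (no matching process) is exactly the
	# "apiserver is not running: (state=Stopped)" case reported here.
	minikube -p functional-763073 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'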
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:
-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
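The inspect output above shows the node's published ports: the apiserver's container port 8441/tcp is bound to 127.0.0.1:33151 on the host. The same Go-template trick minikube used earlier for the SSH port (the cli_runner line querying "22/tcp") extracts any of them:

	# Prints 33151, per the NetworkSettings.Ports block above:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-763073

	# While kubelet is crash-looping, probing that port reproduces the
	# "connection refused" seen throughout this report:
	curl -k https://127.0.0.1:33151/version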
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (342.698637ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-763073 service hello-node --url                                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ mount     │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001:/mount-9p --alsologtostderr -v=1              │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh       │ functional-763073 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh       │ functional-763073 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh       │ functional-763073 ssh -- ls -la /mount-9p                                                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh       │ functional-763073 ssh cat /mount-9p/test-1765860609398103848                                                                                        │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh       │ functional-763073 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh       │ functional-763073 ssh sudo umount -f /mount-9p                                                                                                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ mount     │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3173719408/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh       │ functional-763073 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh       │ functional-763073 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh       │ functional-763073 ssh -- ls -la /mount-9p                                                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh       │ functional-763073 ssh sudo umount -f /mount-9p                                                                                                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ mount     │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount1 --alsologtostderr -v=1                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ mount     │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount2 --alsologtostderr -v=1                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ mount     │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount3 --alsologtostderr -v=1                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh       │ functional-763073 ssh findmnt -T /mount1                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh       │ functional-763073 ssh findmnt -T /mount1                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh       │ functional-763073 ssh findmnt -T /mount2                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh       │ functional-763073 ssh findmnt -T /mount3                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ mount     │ -p functional-763073 --kill=true                                                                                                                    │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ start     │ -p functional-763073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ start     │ -p functional-763073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ start     │ -p functional-763073 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-763073 --alsologtostderr -v=1                                                                                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:50:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:50:19.275087  498832 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:50:19.275230  498832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:50:19.275256  498832 out.go:374] Setting ErrFile to fd 2...
	I1216 04:50:19.275275  498832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:50:19.275561  498832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:50:19.275974  498832 out.go:368] Setting JSON to false
	I1216 04:50:19.276868  498832 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12766,"bootTime":1765847854,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:50:19.276969  498832 start.go:143] virtualization:  
	I1216 04:50:19.280401  498832 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:50:19.283297  498832 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:50:19.283468  498832 notify.go:221] Checking for updates...
	I1216 04:50:19.289162  498832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:50:19.292148  498832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:50:19.295134  498832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:50:19.297944  498832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:50:19.300988  498832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:50:19.304379  498832 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:50:19.305004  498832 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:50:19.352771  498832 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:50:19.352964  498832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:50:19.426222  498832 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:50:19.416940994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:50:19.426334  498832 docker.go:319] overlay module found
	I1216 04:50:19.429570  498832 out.go:179] * Using the docker driver based on existing profile
	I1216 04:50:19.432360  498832 start.go:309] selected driver: docker
	I1216 04:50:19.432375  498832 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:50:19.432474  498832 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:50:19.432581  498832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:50:19.497183  498832 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:50:19.487722858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:50:19.497651  498832 cni.go:84] Creating CNI manager for ""
	I1216 04:50:19.497714  498832 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:50:19.497750  498832 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:50:19.500776  498832 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714709668Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.71475743Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714823679Z" level=info msg="Create NRI interface"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714952197Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714978487Z" level=info msg="runtime interface created"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714994996Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715003956Z" level=info msg="runtime interface starting up..."
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715015205Z" level=info msg="starting plugins..."
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715027849Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715097331Z" level=info msg="No systemd watchdog enabled"
	Dec 16 04:36:03 functional-763073 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.566937768Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5b381738-c32a-40c6-affb-c4aad9d726b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.567803155Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=7302f23d-29b3-4ddc-ad63-9af170663562 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.568336568Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=470a4814-2c77-4f21-97ca-d4b2d8b367c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.56886276Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e3d63019-6956-4b8d-9795-5e45ed470016 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.569572699Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1715eb88-0ece-47e1-8cf4-08ec329b9548 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.570118822Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=17ac1632-ceef-4623-82d4-95709ece00f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.570664255Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=9e736680-8e53-4709-9714-232fbfa617ef name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.365457664Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=66aba16f-2286-4957-9589-3f6b308f0653 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366373784Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a0b09546-fe1b-440e-8076-598a1e2930d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366892723Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=976ba277-fbb2-4db1-8ee0-ce87f329b2fa name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367464412Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=15d708f7-0c1f-4e61-bde7-afc75b1dc430 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367935941Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2d28f296-8f48-4bb2-bf27-13281f9a3b27 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368429435Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=82541142-23b6-4f48-816e-5b740356cd35 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368875848Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=29b0dee6-8ec8-4ecc-822d-bf19bcc0e034 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:50:20.817176   23427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:20.818088   23427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:20.819997   23427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:20.820800   23427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:20.821772   23427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	[Dec16 04:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:50:20 up  3:32,  0 user,  load average: 1.59, 0.58, 0.56
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:50:18 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:19 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1128.
	Dec 16 04:50:19 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:19 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:19 functional-763073 kubelet[23313]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:19 functional-763073 kubelet[23313]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:19 functional-763073 kubelet[23313]: E1216 04:50:19.389382   23313 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:19 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:19 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:20 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1129.
	Dec 16 04:50:20 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:20 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:20 functional-763073 kubelet[23333]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:20 functional-763073 kubelet[23333]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:20 functional-763073 kubelet[23333]: E1216 04:50:20.113884   23333 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:20 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:20 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:20 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 16 04:50:20 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:20 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:20 functional-763073 kubelet[23432]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:20 functional-763073 kubelet[23432]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:20 functional-763073 kubelet[23432]: E1216 04:50:20.882477   23432 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:20 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:20 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
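The kubelet entries above are the root cause for everything else in this dump: each restart (counter 1128 through 1130) dies in config validation with "kubelet is configured to not run on a host using cgroup v1", so the static pods never start, kube-apiserver never binds port 8441, and the "connection refused" errors under "describe nodes" follow directly. A minimal check of which cgroup hierarchy the node actually exposes, assuming shell access on the host or inside the container; these commands are illustrative and were not part of the recorded run:

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy
	stat -fc %T /sys/fs/cgroup/
	# Docker reports the same through its info struct
	docker info --format '{{.CgroupVersion}}'

Ubuntu 20.04 hosts boot with cgroup v1 by default unless systemd.unified_cgroup_hierarchy=1 is set on the kernel command line, which is consistent with the failure seen here.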
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (312.46887ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 status: exit status 2 (311.919842ms)

-- stdout --
	functional-763073
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-763073 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (338.758069ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-763073 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 status -o json: exit status 2 (313.612963ms)

-- stdout --
	{"Name":"functional-763073","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-763073 status -o json" : exit status 2
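The JSON above also documents the fields available to the status --format Go templates (Name, Host, Kubelet, APIServer, Kubeconfig, Worker). A sketch of scripting against either form, assuming a jq binary on the PATH (jq is not used by the test itself):

	out/minikube-linux-arm64 -p functional-763073 status -o json | jq -r .APIServer    # "Stopped" in this run
	out/minikube-linux-arm64 -p functional-763073 status --format '{{.Host}}/{{.APIServer}}'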
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:

-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
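Rather than scanning the full inspect document, a single field can be extracted with a Go template, the same technique the Last Start log below applies to 22/tcp. For example, to recover the host port published for the apiserver's 8441/tcp (33151 in this run); illustrative, not part of the test flow:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-763073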
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (315.302974ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-763073 logs -n 25: (1.006173164s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-763073 service list                                                                                                                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ service │ functional-763073 service list -o json                                                                                                              │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ service │ functional-763073 service --namespace=default --https --url hello-node                                                                              │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ service │ functional-763073 service hello-node --url --format={{.IP}}                                                                                         │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ service │ functional-763073 service hello-node --url                                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ mount   │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001:/mount-9p --alsologtostderr -v=1              │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh     │ functional-763073 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh     │ functional-763073 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh     │ functional-763073 ssh -- ls -la /mount-9p                                                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh     │ functional-763073 ssh cat /mount-9p/test-1765860609398103848                                                                                        │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh     │ functional-763073 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh     │ functional-763073 ssh sudo umount -f /mount-9p                                                                                                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ mount   │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3173719408/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh     │ functional-763073 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh     │ functional-763073 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh     │ functional-763073 ssh -- ls -la /mount-9p                                                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh     │ functional-763073 ssh sudo umount -f /mount-9p                                                                                                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ mount   │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount1 --alsologtostderr -v=1                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ mount   │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount2 --alsologtostderr -v=1                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ mount   │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount3 --alsologtostderr -v=1                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh     │ functional-763073 ssh findmnt -T /mount1                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh     │ functional-763073 ssh findmnt -T /mount1                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh     │ functional-763073 ssh findmnt -T /mount2                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh     │ functional-763073 ssh findmnt -T /mount3                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ mount   │ -p functional-763073 --kill=true                                                                                                                    │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:36:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:36:00.490248  481598 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:36:00.490394  481598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:36:00.490398  481598 out.go:374] Setting ErrFile to fd 2...
	I1216 04:36:00.490402  481598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:36:00.490827  481598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:36:00.491840  481598 out.go:368] Setting JSON to false
	I1216 04:36:00.492932  481598 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11907,"bootTime":1765847854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:36:00.493015  481598 start.go:143] virtualization:  
	I1216 04:36:00.496736  481598 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:36:00.500271  481598 notify.go:221] Checking for updates...
	I1216 04:36:00.500857  481598 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:36:00.504041  481598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:36:00.507246  481598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:36:00.510546  481598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:36:00.513957  481598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:36:00.517802  481598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:36:00.521529  481598 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:36:00.521658  481598 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:36:00.547571  481598 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:36:00.547683  481598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:36:00.612217  481598 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 04:36:00.602438298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:36:00.612309  481598 docker.go:319] overlay module found
	I1216 04:36:00.615642  481598 out.go:179] * Using the docker driver based on existing profile
	I1216 04:36:00.618516  481598 start.go:309] selected driver: docker
	I1216 04:36:00.618544  481598 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:00.618637  481598 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:36:00.618758  481598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:36:00.679148  481598 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 04:36:00.669430398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:36:00.679575  481598 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:36:00.679604  481598 cni.go:84] Creating CNI manager for ""
	I1216 04:36:00.679655  481598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:36:00.679698  481598 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:00.682841  481598 out.go:179] * Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	I1216 04:36:00.685829  481598 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:36:00.688866  481598 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:36:00.691890  481598 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:36:00.691964  481598 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:36:00.691972  481598 cache.go:65] Caching tarball of preloaded images
	I1216 04:36:00.691982  481598 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:36:00.692074  481598 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:36:00.692084  481598 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:36:00.692227  481598 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:36:00.712798  481598 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:36:00.712810  481598 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:36:00.712824  481598 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:36:00.712856  481598 start.go:360] acquireMachinesLock for functional-763073: {Name:mk37f96bdb0feffde12ec58bbc71256d58abc2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:36:00.712923  481598 start.go:364] duration metric: took 39.237µs to acquireMachinesLock for "functional-763073"
	I1216 04:36:00.712941  481598 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:36:00.712958  481598 fix.go:54] fixHost starting: 
	I1216 04:36:00.713253  481598 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:36:00.732242  481598 fix.go:112] recreateIfNeeded on functional-763073: state=Running err=<nil>
	W1216 04:36:00.732263  481598 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:36:00.735664  481598 out.go:252] * Updating the running docker "functional-763073" container ...
	I1216 04:36:00.735723  481598 machine.go:94] provisionDockerMachine start ...
	I1216 04:36:00.735809  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:00.753493  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:00.753813  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:00.753819  481598 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:36:00.888929  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:36:00.888952  481598 ubuntu.go:182] provisioning hostname "functional-763073"
	I1216 04:36:00.889028  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:00.908330  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:00.908643  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:00.908652  481598 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-763073 && echo "functional-763073" | sudo tee /etc/hostname
	I1216 04:36:01.055703  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:36:01.055772  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.082824  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:01.083159  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:01.083173  481598 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-763073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:36:01.221846  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: 
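
The three SSH commands above make the hostname change idempotent: if a line in /etc/hosts already ends in the hostname, nothing is touched; otherwise the existing 127.0.1.1 entry is rewritten, or a new one is appended. A minimal Go sketch of that same decision, operating on the file contents in memory (the function name is illustrative, not minikube's code):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // patchHosts mirrors the shell logic from the log: leave hosts alone if
    // any line already ends in the hostname, otherwise rewrite the 127.0.1.1
    // entry in place, or append one when none exists.
    func patchHosts(hosts, name string) string {
    	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
    		return hosts
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	if !strings.HasSuffix(hosts, "\n") {
    		hosts += "\n"
    	}
    	return hosts + "127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(patchHosts("127.0.0.1 localhost\n127.0.1.1 stale-name\n", "functional-763073"))
    }
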
	I1216 04:36:01.221862  481598 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:36:01.221883  481598 ubuntu.go:190] setting up certificates
	I1216 04:36:01.221900  481598 provision.go:84] configureAuth start
	I1216 04:36:01.221962  481598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:36:01.240557  481598 provision.go:143] copyHostCerts
	I1216 04:36:01.240641  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 04:36:01.240650  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:36:01.240725  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:36:01.240821  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 04:36:01.240825  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:36:01.240849  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:36:01.240902  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 04:36:01.240908  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:36:01.240929  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:36:01.240972  481598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.functional-763073 san=[127.0.0.1 192.168.49.2 functional-763073 localhost minikube]
	I1216 04:36:01.624943  481598 provision.go:177] copyRemoteCerts
	I1216 04:36:01.624996  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:36:01.625036  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.650668  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:01.753682  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:36:01.770658  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:36:01.788383  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:36:01.805726  481598 provision.go:87] duration metric: took 583.803742ms to configureAuth
	I1216 04:36:01.805744  481598 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:36:01.805933  481598 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:36:01.806039  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.826667  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:01.826973  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:01.826985  481598 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:36:02.160545  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:36:02.160560  481598 machine.go:97] duration metric: took 1.424830052s to provisionDockerMachine
	I1216 04:36:02.160570  481598 start.go:293] postStartSetup for "functional-763073" (driver="docker")
	I1216 04:36:02.160582  481598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:36:02.160662  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:36:02.160707  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.182446  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.281163  481598 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:36:02.284621  481598 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:36:02.284640  481598 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:36:02.284650  481598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:36:02.284704  481598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:36:02.284795  481598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 04:36:02.284876  481598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> hosts in /etc/test/nested/copy/441727
	I1216 04:36:02.284919  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/441727
	I1216 04:36:02.293096  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:36:02.311133  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts --> /etc/test/nested/copy/441727/hosts (40 bytes)
	I1216 04:36:02.329120  481598 start.go:296] duration metric: took 168.535354ms for postStartSetup
	I1216 04:36:02.329220  481598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:36:02.329269  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.348104  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.442235  481598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:36:02.448236  481598 fix.go:56] duration metric: took 1.735283267s for fixHost
	I1216 04:36:02.448253  481598 start.go:83] releasing machines lock for "functional-763073", held for 1.735323136s
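
The acquire/release pair above brackets all host mutation with a per-machine lock and reports duration metrics for both acquisition and hold time. A toy Go sketch of such a timed lock (the type and method names are invented for illustration, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // timedLock records how long acquisition took and how long the lock
    // was held, like the duration metrics in the log above.
    type timedLock struct {
    	mu       sync.Mutex
    	acquired time.Time
    }

    func (l *timedLock) Lock(name string) {
    	start := time.Now()
    	l.mu.Lock()
    	l.acquired = time.Now()
    	fmt.Printf("took %s to acquire lock for %q\n", time.Since(start), name)
    }

    func (l *timedLock) Unlock(name string) {
    	held := time.Since(l.acquired)
    	l.mu.Unlock()
    	fmt.Printf("released lock for %q, held for %s\n", name, held)
    }

    func main() {
    	var l timedLock
    	l.Lock("functional-763073")
    	time.Sleep(10 * time.Millisecond)
    	l.Unlock("functional-763073")
    }
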
	I1216 04:36:02.448324  481598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:36:02.466005  481598 ssh_runner.go:195] Run: cat /version.json
	I1216 04:36:02.466044  481598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:36:02.466046  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.466114  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.490975  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.491519  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.685578  481598 ssh_runner.go:195] Run: systemctl --version
	I1216 04:36:02.692865  481598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:36:02.731424  481598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:36:02.735810  481598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:36:02.735877  481598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:36:02.743925  481598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:36:02.743939  481598 start.go:496] detecting cgroup driver to use...
	I1216 04:36:02.743971  481598 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:36:02.744017  481598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:36:02.759444  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:36:02.772624  481598 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:36:02.772678  481598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:36:02.788424  481598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:36:02.802435  481598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:36:02.920156  481598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:36:03.035227  481598 docker.go:234] disabling docker service ...
	I1216 04:36:03.035430  481598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:36:03.052008  481598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:36:03.065420  481598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:36:03.183071  481598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:36:03.294099  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:36:03.311925  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:36:03.326859  481598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:36:03.326940  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.336429  481598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:36:03.336497  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.346614  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.357523  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.366947  481598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:36:03.376549  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.385465  481598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.394383  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
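
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to cgroupfs, re-inserts conmon_cgroup = "pod", and ensures a default_sysctls block that opens unprivileged ports. A simplified Go sketch of the same rewrite on the file contents (it collapses several sed steps into one pass and is not minikube's implementation):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf applies the edits the log performs with sed: pin the
    // pause image, force the cgroupfs cgroup manager plus conmon_cgroup, and
    // append a default_sysctls block when one is missing.
    func rewriteCrioConf(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	return conf
    }

    func main() {
    	fmt.Print(rewriteCrioConf("pause_image = \"k8s.gcr.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"))
    }
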
	I1216 04:36:03.404860  481598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:36:03.413465  481598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:36:03.422752  481598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:36:03.536676  481598 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 04:36:03.720606  481598 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:36:03.720702  481598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:36:03.724603  481598 start.go:564] Will wait 60s for crictl version
	I1216 04:36:03.724660  481598 ssh_runner.go:195] Run: which crictl
	I1216 04:36:03.728340  481598 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:36:03.755140  481598 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:36:03.755232  481598 ssh_runner.go:195] Run: crio --version
	I1216 04:36:03.787753  481598 ssh_runner.go:195] Run: crio --version
	I1216 04:36:03.823457  481598 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 04:36:03.826282  481598 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:36:03.843358  481598 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:36:03.850470  481598 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 04:36:03.853320  481598 kubeadm.go:884] updating cluster {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:36:03.853444  481598 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:36:03.853515  481598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:36:03.889904  481598 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:36:03.889916  481598 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:36:03.889975  481598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:36:03.917662  481598 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:36:03.917679  481598 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:36:03.917686  481598 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 04:36:03.917785  481598 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-763073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:36:03.917879  481598 ssh_runner.go:195] Run: crio config
	I1216 04:36:03.990629  481598 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 04:36:03.990650  481598 cni.go:84] Creating CNI manager for ""
	I1216 04:36:03.990663  481598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:36:03.990677  481598 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:36:03.990700  481598 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-763073 NodeName:functional-763073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:36:03.990828  481598 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-763073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
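The rendered kubeadm.yaml above is a single stream of four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml.new and later fed to the kubeadm init phase commands. A small Go sketch that splits such a stream into its documents (illustrative only):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitYAMLDocs splits a multi-document YAML stream on "---" separator
    // lines, the layout of the kubeadm.yaml rendered above.
    func splitYAMLDocs(stream string) []string {
    	var docs []string
    	for _, d := range strings.Split(stream, "\n---\n") {
    		if s := strings.TrimSpace(d); s != "" {
    			docs = append(docs, s)
    		}
    	}
    	return docs
    }

    func main() {
    	stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
    	for i, doc := range splitYAMLDocs(stream) {
    		fmt.Printf("doc %d: %s\n", i, doc)
    	}
    }
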
	I1216 04:36:03.990905  481598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:36:03.999067  481598 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:36:03.999139  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:36:04.008352  481598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 04:36:04.030586  481598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:36:04.045153  481598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1216 04:36:04.060527  481598 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:36:04.065456  481598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:36:04.194475  481598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:36:04.817563  481598 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073 for IP: 192.168.49.2
	I1216 04:36:04.817574  481598 certs.go:195] generating shared ca certs ...
	I1216 04:36:04.817590  481598 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:36:04.817743  481598 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:36:04.817795  481598 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:36:04.817801  481598 certs.go:257] generating profile certs ...
	I1216 04:36:04.817883  481598 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key
	I1216 04:36:04.817938  481598 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195
	I1216 04:36:04.817975  481598 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key
	I1216 04:36:04.818092  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 04:36:04.818123  481598 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 04:36:04.818130  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:36:04.818156  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:36:04.818185  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:36:04.818212  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:36:04.818262  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:36:04.818840  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:36:04.841132  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:36:04.865044  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:36:04.885624  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:36:04.903731  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:36:04.922117  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:36:04.940753  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:36:04.958685  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:36:04.976252  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:36:04.996895  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 04:36:05.024451  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 04:36:05.043756  481598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:36:05.056987  481598 ssh_runner.go:195] Run: openssl version
	I1216 04:36:05.063602  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.071513  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 04:36:05.079286  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.083120  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.083179  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.124591  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:36:05.132537  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.139980  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:36:05.147726  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.151460  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.151517  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.192644  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:36:05.200305  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.207653  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 04:36:05.215074  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.218794  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.218861  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.260201  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:36:05.267700  481598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:36:05.271723  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:36:05.312770  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:36:05.354108  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:36:05.396136  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:36:05.437154  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:36:05.478283  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
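
Each openssl invocation above uses -checkend 86400 to ask whether a control-plane certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509 (the path in main is taken from the log; the function itself is a sketch, not minikube's code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, mirroring `openssl x509 -noout -in <path> -checkend 86400`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
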
	I1216 04:36:05.519503  481598 kubeadm.go:401] StartCluster: {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:05.519581  481598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:36:05.519651  481598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:36:05.550651  481598 cri.go:89] found id: ""
	I1216 04:36:05.550716  481598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:36:05.558332  481598 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:36:05.558341  481598 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:36:05.558398  481598 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:36:05.566851  481598 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.567385  481598 kubeconfig.go:125] found "functional-763073" server: "https://192.168.49.2:8441"
	I1216 04:36:05.568647  481598 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:36:05.577205  481598 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 04:21:27.024069044 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 04:36:04.056943145 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
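
The unified diff above shows the only drift: the apiserver enable-admission-plugins value changed from the defaults to NamespaceAutoProvision, which is what forces the reconfiguration path. Drift detection of this kind can lean on diff's exit status, 0 for identical files and 1 for differing ones, as in this Go sketch (the function name is invented; this is not minikube's implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs `diff -u old new`: exit 0 means identical, exit 1
    // means the files differ (the drift case above), anything else is a
    // real error.
    func configDrifted(oldPath, newPath string) (bool, error) {
    	err := exec.Command("diff", "-u", oldPath, newPath).Run()
    	if err == nil {
    		return false, nil
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, nil
    	}
    	return false, err
    }

    func main() {
    	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(drifted, err)
    }
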
	I1216 04:36:05.577214  481598 kubeadm.go:1161] stopping kube-system containers ...
	I1216 04:36:05.577232  481598 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 04:36:05.577291  481598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:36:05.613634  481598 cri.go:89] found id: ""
	I1216 04:36:05.613693  481598 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 04:36:05.631237  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:36:05.639373  481598 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 04:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 04:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 16 04:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 16 04:25 /etc/kubernetes/scheduler.conf
	
	I1216 04:36:05.639436  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:36:05.647869  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:36:05.655663  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.655719  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:36:05.663273  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:36:05.671183  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.671243  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:36:05.678591  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:36:05.686132  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.686188  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
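
Each grep/rm pair above verifies that a kubeconfig still points at https://control-plane.minikube.internal:8441 and deletes it when the grep exits non-zero, so the following `kubeadm init phase kubeconfig` regenerates it against the right endpoint. A minimal Go sketch of the same check (the helper name is invented for illustration):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pointsAt reports whether the kubeconfig at path references server,
    // standing in for the `sudo grep <server> <conf>` calls in the log.
    func pointsAt(path, server string) bool {
    	data, err := os.ReadFile(path)
    	return err == nil && strings.Contains(string(data), server)
    }

    func main() {
    	server := "https://control-plane.minikube.internal:8441"
    	for _, conf := range []string{
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if !pointsAt(conf, server) {
    			fmt.Println("would remove stale kubeconfig:", conf)
    		}
    	}
    }
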
	I1216 04:36:05.693450  481598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:36:05.701540  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:05.748475  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.491126  481598 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.742626292s)
	I1216 04:36:07.491187  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.697669  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.751926  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.807760  481598 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:36:07.807833  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:08.308888  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:08.808759  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:09.308977  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:09.808282  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:10.307985  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:10.808951  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:11.308256  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:11.808637  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:12.308024  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:12.808040  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:13.307998  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:13.808659  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:14.308930  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:14.808879  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:15.308001  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:15.808638  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:16.308025  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:16.808728  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:17.308874  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:17.807914  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:18.308153  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:18.808033  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:19.308758  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:19.808709  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:20.308226  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:20.808665  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:21.308593  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:21.808198  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:22.308415  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:22.808582  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:23.307967  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:23.808028  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:24.308762  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:24.808091  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:25.308960  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:25.808782  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:26.308423  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:26.808157  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:27.308038  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:27.808057  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:28.308023  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:28.808946  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:29.308972  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:29.807943  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:30.307922  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:30.807937  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:31.308667  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:31.808045  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:32.308212  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:32.808619  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:33.308733  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:33.808032  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:34.308860  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:34.808072  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:35.308007  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:35.808024  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:36.307979  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:36.808901  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:37.308808  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:37.808025  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:38.308031  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:38.808882  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:39.308837  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:39.807987  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:40.307961  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:40.808950  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:41.308266  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:41.808923  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:42.308656  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:42.808860  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:43.308034  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:43.808867  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:44.308569  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:44.808040  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:45.307977  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:45.808782  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:46.308633  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:46.808122  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:47.307944  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:47.808798  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:48.308017  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:48.807983  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:49.308319  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:49.807968  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:50.308009  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:50.807982  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:51.308783  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:51.808921  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:52.308093  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:52.808677  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:53.308049  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:53.808424  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:54.308936  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:54.808179  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:55.308330  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:55.808590  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:56.308098  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:56.808705  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:57.308058  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:57.807911  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:58.308881  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:58.808413  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:59.308020  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:59.808592  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:00.308911  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:00.808175  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:01.307995  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:01.808695  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:02.308009  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:02.808771  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:03.308033  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:03.808432  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:04.308848  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:04.807977  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:05.307980  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:05.808869  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:06.308433  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:06.808830  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:07.308901  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
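	(The run of pgrep calls above is minikube's apiserver readiness wait: roughly twice a second it checks the host for a process whose full command line matches kube-apiserver.*minikube.*, and only when that wait gives up does it fall through to the CRI-level checks below. A minimal standalone sketch of the same poll; the 240-iteration budget is an assumption for illustration, not taken from this log:

	    # poll ~2x/second for a kube-apiserver process; the iteration count is hypothetical
	    for _ in $(seq 1 240); do
	        sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo "apiserver process found"; break; }
	        sleep 0.5
	    done
	)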
	I1216 04:37:07.808015  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:07.808111  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:07.837945  481598 cri.go:89] found id: ""
	I1216 04:37:07.837959  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.837965  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:07.837970  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:07.838028  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:07.869351  481598 cri.go:89] found id: ""
	I1216 04:37:07.869366  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.869372  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:07.869377  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:07.869436  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:07.907276  481598 cri.go:89] found id: ""
	I1216 04:37:07.907290  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.907297  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:07.907302  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:07.907360  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:07.933358  481598 cri.go:89] found id: ""
	I1216 04:37:07.933373  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.933380  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:07.933385  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:07.933443  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:07.960678  481598 cri.go:89] found id: ""
	I1216 04:37:07.960692  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.960699  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:07.960704  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:07.960761  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:07.986399  481598 cri.go:89] found id: ""
	I1216 04:37:07.986414  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.986421  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:07.986426  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:07.986483  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:08.015016  481598 cri.go:89] found id: ""
	I1216 04:37:08.015031  481598 logs.go:282] 0 containers: []
	W1216 04:37:08.015038  481598 logs.go:284] No container was found matching "kindnet"
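	(With no apiserver process found, each expected control-plane container is then probed directly against the CRI runtime. crictl ps -a --quiet --name=<component> lists all containers, running or exited, whose name matches, and prints only their IDs; the empty results (found id: "") are what drive the "No container was found matching" warnings. The same probe, written as a loop over the components checked here:

	    # probe each expected component; empty output means no matching container exists
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        [ -z "$(sudo crictl ps -a --quiet --name="$c")" ] && echo "no container matching $c"
	    done
	)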
	I1216 04:37:08.015046  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:08.015057  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:08.088739  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:08.088761  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:08.107036  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:08.107052  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:08.176727  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:08.167962   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.168702   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.170464   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.171100   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.172772   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:08.167962   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.168702   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.170464   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.171100   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.172772   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
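	(The describe-nodes failure above is a downstream symptom rather than a separate fault: kubectl reads the server address from /var/lib/minikube/kubeconfig, localhost:8441 for this cluster, and with no apiserver process or container present, every TCP connection to that port is refused. The endpoint can be probed by hand from the node; the curl form below is an assumption for illustration, and any TCP check of port 8441 tells the same story:

	    # expect a refused connection while the apiserver is down (hypothetical probe)
	    curl -sk --max-time 5 https://localhost:8441/livez || echo "apiserver unreachable on :8441"
	)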
	I1216 04:37:08.176736  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:08.176749  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:08.244460  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:08.244483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
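	(The container-status step deliberately avoids assuming a runtime. `which crictl || echo crictl` substitutes crictl's full path when it is installed, and leaves the bare name otherwise so the failure message stays readable; if the crictl listing fails outright, the whole command falls back to docker:

	    # prefer crictl; fall back to docker if the CRI listing fails
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	)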
	I1216 04:37:10.772766  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:10.783210  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:10.783271  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:10.811358  481598 cri.go:89] found id: ""
	I1216 04:37:10.811374  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.811382  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:10.811388  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:10.811451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:10.841691  481598 cri.go:89] found id: ""
	I1216 04:37:10.841705  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.841712  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:10.841717  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:10.841792  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:10.869111  481598 cri.go:89] found id: ""
	I1216 04:37:10.869133  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.869141  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:10.869146  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:10.869227  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:10.897617  481598 cri.go:89] found id: ""
	I1216 04:37:10.897632  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.897640  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:10.897646  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:10.897709  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:10.924814  481598 cri.go:89] found id: ""
	I1216 04:37:10.924829  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.924838  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:10.924849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:10.924909  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:10.951147  481598 cri.go:89] found id: ""
	I1216 04:37:10.951162  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.951170  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:10.951181  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:10.951240  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:10.977944  481598 cri.go:89] found id: ""
	I1216 04:37:10.977958  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.977965  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:10.977973  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:10.977984  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:11.046933  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:11.046953  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:11.062324  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:11.062340  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:11.128033  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:11.119557   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.119965   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.121750   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.122402   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.124048   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:11.119557   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.119965   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.121750   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.122402   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.124048   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:11.128044  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:11.128055  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:11.195835  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:11.195855  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:13.729443  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:13.739852  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:13.739911  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:13.765288  481598 cri.go:89] found id: ""
	I1216 04:37:13.765303  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.765310  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:13.765315  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:13.765372  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:13.791619  481598 cri.go:89] found id: ""
	I1216 04:37:13.791634  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.791641  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:13.791646  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:13.791713  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:13.829008  481598 cri.go:89] found id: ""
	I1216 04:37:13.829021  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.829028  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:13.829033  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:13.829115  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:13.860708  481598 cri.go:89] found id: ""
	I1216 04:37:13.860722  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.860729  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:13.860734  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:13.860795  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:13.890573  481598 cri.go:89] found id: ""
	I1216 04:37:13.890587  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.890594  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:13.890600  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:13.890659  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:13.921520  481598 cri.go:89] found id: ""
	I1216 04:37:13.921535  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.921543  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:13.921555  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:13.921616  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:13.950847  481598 cri.go:89] found id: ""
	I1216 04:37:13.950864  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.950882  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:13.950890  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:13.950901  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:13.965697  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:13.965713  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:14.040284  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:14.030948   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.031892   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.033714   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.034372   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.035987   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:14.030948   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.031892   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.033714   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.034372   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.035987   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:14.040295  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:14.040307  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:14.114244  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:14.114266  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:14.146926  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:14.146942  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:16.715163  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:16.725607  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:16.725688  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:16.751194  481598 cri.go:89] found id: ""
	I1216 04:37:16.751208  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.751215  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:16.751220  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:16.751277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:16.780407  481598 cri.go:89] found id: ""
	I1216 04:37:16.780421  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.780428  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:16.780433  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:16.780496  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:16.806409  481598 cri.go:89] found id: ""
	I1216 04:37:16.806424  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.806431  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:16.806436  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:16.806504  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:16.838220  481598 cri.go:89] found id: ""
	I1216 04:37:16.838235  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.838242  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:16.838247  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:16.838306  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:16.866315  481598 cri.go:89] found id: ""
	I1216 04:37:16.866329  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.866336  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:16.866341  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:16.866414  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:16.899090  481598 cri.go:89] found id: ""
	I1216 04:37:16.899105  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.899112  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:16.899117  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:16.899178  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:16.924588  481598 cri.go:89] found id: ""
	I1216 04:37:16.924603  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.924611  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:16.924618  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:16.924630  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:16.993464  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:16.993485  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:17.009562  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:17.009582  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:17.075397  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:17.067506   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.068020   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069521   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069902   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.071382   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:17.067506   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.068020   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069521   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069902   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.071382   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:17.075408  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:17.075421  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:17.144979  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:17.145001  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:19.675069  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:19.685090  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:19.685149  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:19.711697  481598 cri.go:89] found id: ""
	I1216 04:37:19.711712  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.711719  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:19.711724  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:19.711781  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:19.737017  481598 cri.go:89] found id: ""
	I1216 04:37:19.737031  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.737038  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:19.737043  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:19.737129  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:19.764129  481598 cri.go:89] found id: ""
	I1216 04:37:19.764143  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.764150  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:19.764155  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:19.764210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:19.790063  481598 cri.go:89] found id: ""
	I1216 04:37:19.790077  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.790084  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:19.790098  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:19.790154  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:19.821689  481598 cri.go:89] found id: ""
	I1216 04:37:19.821703  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.821710  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:19.821716  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:19.821774  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:19.854088  481598 cri.go:89] found id: ""
	I1216 04:37:19.854103  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.854111  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:19.854116  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:19.854178  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:19.893475  481598 cri.go:89] found id: ""
	I1216 04:37:19.893496  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.893505  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:19.893513  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:19.893524  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:19.961902  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:19.953918   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.954677   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956259   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956573   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.957902   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:19.953918   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.954677   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956259   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956573   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.957902   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:19.961916  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:19.961927  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:20.031206  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:20.031233  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:20.062576  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:20.062596  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:20.132798  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:20.132818  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
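	(Each gathering pass captures the same three host-side sources, 400 lines apiece: the kubelet and crio units via journalctl, and the kernel ring buffer filtered to warning severity and above; in the dmesg invocation, -P disables the pager, -H prints human-readable timestamps, and -L=never suppresses color so the output stays log-safe. The equivalent standalone capture, assuming shell access to the node:

	    # capture the same three sources minikube gathers (400 lines each)
	    sudo journalctl -u kubelet -n 400 > kubelet.log
	    sudo journalctl -u crio    -n 400 > crio.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	)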
	I1216 04:37:22.649716  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:22.659636  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:22.659698  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:22.684490  481598 cri.go:89] found id: ""
	I1216 04:37:22.684505  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.684512  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:22.684542  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:22.684599  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:22.709083  481598 cri.go:89] found id: ""
	I1216 04:37:22.709098  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.709105  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:22.709110  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:22.709165  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:22.734473  481598 cri.go:89] found id: ""
	I1216 04:37:22.734487  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.734494  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:22.734499  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:22.734557  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:22.759459  481598 cri.go:89] found id: ""
	I1216 04:37:22.759473  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.759480  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:22.759485  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:22.759540  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:22.784416  481598 cri.go:89] found id: ""
	I1216 04:37:22.784430  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.784437  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:22.784442  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:22.784508  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:22.808823  481598 cri.go:89] found id: ""
	I1216 04:37:22.808837  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.808844  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:22.808849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:22.808906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:22.845939  481598 cri.go:89] found id: ""
	I1216 04:37:22.845965  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.845973  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:22.845980  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:22.846001  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:22.939972  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:22.939998  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:22.969984  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:22.970003  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:23.041537  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:23.041560  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:23.059445  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:23.059461  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:23.127407  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:23.119122   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.119663   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121470   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121806   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.123327   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:23.119122   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.119663   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121470   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121806   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.123327   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
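	(The timestamps show the whole probe-and-gather cycle repeating on a roughly three-second cadence (04:37:22, 04:37:25, 04:37:28), with only the order of the gathering steps rotating between passes. Bounding the same outer retry with a hard deadline might look like the sketch below; the 120-second value is an assumption, not taken from this log:

	    # retry the apiserver check every ~3s under a hard overall deadline (values assumed)
	    timeout 120 bash -c \
	        'until sudo pgrep -xnf "kube-apiserver.*minikube.*" >/dev/null; do sleep 3; done' \
	        || echo "apiserver never came up; proceed to log collection and fail"
	)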
	I1216 04:37:25.628052  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:25.638431  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:25.638504  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:25.665151  481598 cri.go:89] found id: ""
	I1216 04:37:25.665164  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.665172  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:25.665176  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:25.665249  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:25.695604  481598 cri.go:89] found id: ""
	I1216 04:37:25.695617  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.695625  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:25.695630  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:25.695691  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:25.720754  481598 cri.go:89] found id: ""
	I1216 04:37:25.720768  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.720775  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:25.720780  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:25.720839  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:25.746771  481598 cri.go:89] found id: ""
	I1216 04:37:25.746785  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.746792  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:25.746797  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:25.746857  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:25.776233  481598 cri.go:89] found id: ""
	I1216 04:37:25.776247  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.776264  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:25.776269  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:25.776342  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:25.803891  481598 cri.go:89] found id: ""
	I1216 04:37:25.803914  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.803922  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:25.803927  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:25.804021  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:25.845002  481598 cri.go:89] found id: ""
	I1216 04:37:25.845016  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.845023  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:25.845040  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:25.845053  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:25.921736  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:25.913341   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.914262   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.915800   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.916138   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.917723   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:25.913341   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.914262   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.915800   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.916138   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.917723   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:25.921746  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:25.921757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:25.989735  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:25.989756  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:26.020992  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:26.021012  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:26.094837  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:26.094856  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:28.610236  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:28.620641  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:28.620702  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:28.648449  481598 cri.go:89] found id: ""
	I1216 04:37:28.648463  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.648470  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:28.648480  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:28.648539  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:28.675317  481598 cri.go:89] found id: ""
	I1216 04:37:28.675332  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.675339  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:28.675344  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:28.675402  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:28.700978  481598 cri.go:89] found id: ""
	I1216 04:37:28.700992  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.700998  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:28.701003  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:28.701104  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:28.726354  481598 cri.go:89] found id: ""
	I1216 04:37:28.726367  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.726374  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:28.726379  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:28.726436  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:28.752843  481598 cri.go:89] found id: ""
	I1216 04:37:28.752857  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.752864  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:28.752869  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:28.752927  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:28.778190  481598 cri.go:89] found id: ""
	I1216 04:37:28.778205  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.778212  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:28.778217  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:28.778280  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:28.803029  481598 cri.go:89] found id: ""
	I1216 04:37:28.803044  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.803051  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:28.803059  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:28.803070  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:28.896742  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:28.888260   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.888935   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890571   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890932   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.892534   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:28.888260   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.888935   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890571   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890932   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.892534   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:28.896763  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:28.896776  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:28.964206  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:28.964228  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:28.996487  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:28.996503  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:29.063978  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:29.063998  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:31.580896  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:31.591181  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:31.591249  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:31.616263  481598 cri.go:89] found id: ""
	I1216 04:37:31.616277  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.616284  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:31.616289  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:31.616345  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:31.641685  481598 cri.go:89] found id: ""
	I1216 04:37:31.641700  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.641707  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:31.641712  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:31.641771  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:31.667472  481598 cri.go:89] found id: ""
	I1216 04:37:31.667487  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.667495  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:31.667500  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:31.667557  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:31.697212  481598 cri.go:89] found id: ""
	I1216 04:37:31.697241  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.697248  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:31.697253  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:31.697311  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:31.723185  481598 cri.go:89] found id: ""
	I1216 04:37:31.723199  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.723207  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:31.723212  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:31.723273  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:31.749934  481598 cri.go:89] found id: ""
	I1216 04:37:31.749957  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.749965  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:31.749970  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:31.750035  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:31.776884  481598 cri.go:89] found id: ""
	I1216 04:37:31.776905  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.776911  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:31.776922  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:31.776933  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:31.856147  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:31.846171   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.847794   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.848402   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850247   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850827   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:31.846171   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.847794   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.848402   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850247   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850827   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:31.856168  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:31.856188  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:31.928187  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:31.928207  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:31.960005  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:31.960023  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:32.031454  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:32.031474  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:34.550103  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:34.560823  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:34.560882  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:34.587067  481598 cri.go:89] found id: ""
	I1216 04:37:34.587082  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.587092  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:34.587097  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:34.587160  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:34.613934  481598 cri.go:89] found id: ""
	I1216 04:37:34.613949  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.613956  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:34.613961  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:34.614018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:34.639997  481598 cri.go:89] found id: ""
	I1216 04:37:34.640011  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.640018  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:34.640023  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:34.640087  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:34.666140  481598 cri.go:89] found id: ""
	I1216 04:37:34.666154  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.666161  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:34.666166  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:34.666226  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:34.692116  481598 cri.go:89] found id: ""
	I1216 04:37:34.692131  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.692138  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:34.692143  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:34.692203  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:34.717134  481598 cri.go:89] found id: ""
	I1216 04:37:34.717148  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.717156  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:34.717161  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:34.717228  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:34.743931  481598 cri.go:89] found id: ""
	I1216 04:37:34.743946  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.743963  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:34.743971  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:34.743983  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:34.809826  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:34.809849  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:34.827619  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:34.827636  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:34.903666  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:34.894237   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.895124   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.896898   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.897701   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.898407   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:34.894237   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.895124   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.896898   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.897701   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.898407   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:34.903676  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:34.903686  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:34.972944  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:34.972967  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:37.507549  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:37.517802  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:37.517863  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:37.543131  481598 cri.go:89] found id: ""
	I1216 04:37:37.543147  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.543155  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:37.543167  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:37.543224  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:37.568202  481598 cri.go:89] found id: ""
	I1216 04:37:37.568216  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.568223  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:37.568231  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:37.568288  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:37.593976  481598 cri.go:89] found id: ""
	I1216 04:37:37.593991  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.593998  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:37.594003  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:37.594066  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:37.619760  481598 cri.go:89] found id: ""
	I1216 04:37:37.619774  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.619781  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:37.619787  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:37.619848  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:37.644836  481598 cri.go:89] found id: ""
	I1216 04:37:37.644850  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.644857  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:37.644862  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:37.644921  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:37.670454  481598 cri.go:89] found id: ""
	I1216 04:37:37.670468  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.670476  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:37.670481  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:37.670537  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:37.695742  481598 cri.go:89] found id: ""
	I1216 04:37:37.695762  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.695769  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:37.695777  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:37.695787  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:37.759713  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:37.759732  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:37.774589  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:37.774606  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:37.849933  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:37.841390   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.842110   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.843743   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.844252   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.845814   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:37.841390   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.842110   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.843743   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.844252   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.845814   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:37.849945  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:37.849955  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:37.928468  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:37.928489  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:40.459800  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:40.470285  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:40.470349  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:40.499380  481598 cri.go:89] found id: ""
	I1216 04:37:40.499394  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.499401  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:40.499406  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:40.499464  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:40.528986  481598 cri.go:89] found id: ""
	I1216 04:37:40.529000  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.529007  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:40.529012  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:40.529089  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:40.555623  481598 cri.go:89] found id: ""
	I1216 04:37:40.555638  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.555646  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:40.555651  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:40.555708  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:40.581298  481598 cri.go:89] found id: ""
	I1216 04:37:40.581312  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.581319  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:40.581324  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:40.581382  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:40.611085  481598 cri.go:89] found id: ""
	I1216 04:37:40.611099  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.611106  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:40.611113  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:40.611173  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:40.636162  481598 cri.go:89] found id: ""
	I1216 04:37:40.636178  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.636185  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:40.636190  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:40.636250  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:40.664257  481598 cri.go:89] found id: ""
	I1216 04:37:40.664272  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.664279  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:40.664287  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:40.664299  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:40.680011  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:40.680027  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:40.745907  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:40.737277   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.738066   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.739727   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.740303   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.741915   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:40.737277   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.738066   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.739727   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.740303   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.741915   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:40.745919  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:40.745932  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:40.814715  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:40.814735  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:40.859159  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:40.859181  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:43.432718  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:43.443193  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:43.443264  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:43.469157  481598 cri.go:89] found id: ""
	I1216 04:37:43.469187  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.469195  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:43.469200  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:43.469323  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:43.494783  481598 cri.go:89] found id: ""
	I1216 04:37:43.494796  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.494804  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:43.494809  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:43.494869  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:43.521488  481598 cri.go:89] found id: ""
	I1216 04:37:43.521502  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.521509  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:43.521514  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:43.521573  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:43.550707  481598 cri.go:89] found id: ""
	I1216 04:37:43.550721  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.550728  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:43.550733  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:43.550791  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:43.579977  481598 cri.go:89] found id: ""
	I1216 04:37:43.579991  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.579997  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:43.580002  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:43.580064  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:43.605041  481598 cri.go:89] found id: ""
	I1216 04:37:43.605056  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.605143  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:43.605149  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:43.605208  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:43.631632  481598 cri.go:89] found id: ""
	I1216 04:37:43.631658  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.631665  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:43.631672  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:43.631691  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:43.701085  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:43.701111  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:43.716379  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:43.716401  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:43.778569  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:43.770070   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.770734   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.772497   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.773037   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.774731   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:43.770070   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.770734   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.772497   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.773037   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.774731   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:43.778594  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:43.778606  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:43.850663  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:43.850686  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:46.388473  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:46.398649  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:46.398713  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:46.425758  481598 cri.go:89] found id: ""
	I1216 04:37:46.425772  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.425780  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:46.425785  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:46.425843  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:46.453363  481598 cri.go:89] found id: ""
	I1216 04:37:46.453377  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.453384  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:46.453389  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:46.453450  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:46.479051  481598 cri.go:89] found id: ""
	I1216 04:37:46.479066  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.479074  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:46.479079  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:46.479135  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:46.509758  481598 cri.go:89] found id: ""
	I1216 04:37:46.509773  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.509781  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:46.509786  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:46.509849  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:46.536775  481598 cri.go:89] found id: ""
	I1216 04:37:46.536788  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.536795  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:46.536801  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:46.536870  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:46.562238  481598 cri.go:89] found id: ""
	I1216 04:37:46.562253  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.562262  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:46.562268  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:46.562326  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:46.588577  481598 cri.go:89] found id: ""
	I1216 04:37:46.588591  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.588598  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:46.588606  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:46.588617  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:46.658427  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:46.658447  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:46.692280  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:46.692304  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:46.758854  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:46.758874  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:46.778062  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:46.778079  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:46.855875  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:46.846770   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.848177   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.849959   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.850258   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.851693   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:46.846770   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.848177   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.849959   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.850258   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.851693   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:49.357557  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:49.367602  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:49.367665  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:49.393022  481598 cri.go:89] found id: ""
	I1216 04:37:49.393037  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.393044  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:49.393049  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:49.393125  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:49.421701  481598 cri.go:89] found id: ""
	I1216 04:37:49.421716  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.421723  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:49.421728  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:49.421789  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:49.447139  481598 cri.go:89] found id: ""
	I1216 04:37:49.447154  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.447161  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:49.447166  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:49.447226  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:49.472003  481598 cri.go:89] found id: ""
	I1216 04:37:49.472018  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.472026  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:49.472032  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:49.472090  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:49.497762  481598 cri.go:89] found id: ""
	I1216 04:37:49.497782  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.497790  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:49.497794  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:49.497853  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:49.527970  481598 cri.go:89] found id: ""
	I1216 04:37:49.527984  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.527992  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:49.527997  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:49.528055  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:49.554573  481598 cri.go:89] found id: ""
	I1216 04:37:49.554587  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.554596  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:49.554604  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:49.554615  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:49.620959  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:49.620979  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:49.636096  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:49.636115  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:49.705535  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:49.696916   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.697607   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699320   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699896   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.701682   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:49.696916   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.697607   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699320   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699896   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.701682   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:49.705545  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:49.705556  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:49.774081  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:49.774101  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:52.303119  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:52.313248  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:52.313317  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:52.339092  481598 cri.go:89] found id: ""
	I1216 04:37:52.339106  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.339113  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:52.339118  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:52.339181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:52.370928  481598 cri.go:89] found id: ""
	I1216 04:37:52.370942  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.370949  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:52.370954  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:52.371011  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:52.395986  481598 cri.go:89] found id: ""
	I1216 04:37:52.396000  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.396007  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:52.396012  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:52.396068  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:52.425010  481598 cri.go:89] found id: ""
	I1216 04:37:52.425024  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.425031  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:52.425036  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:52.425118  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:52.450781  481598 cri.go:89] found id: ""
	I1216 04:37:52.450796  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.450803  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:52.450808  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:52.450867  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:52.476589  481598 cri.go:89] found id: ""
	I1216 04:37:52.476603  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.476611  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:52.476617  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:52.476675  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:52.503929  481598 cri.go:89] found id: ""
	I1216 04:37:52.503944  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.503951  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:52.503959  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:52.503970  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:52.519124  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:52.519149  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:52.587049  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:52.577711   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.578577   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.580576   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.581341   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.583137   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:52.577711   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.578577   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.580576   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.581341   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.583137   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:52.587060  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:52.587072  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:52.657393  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:52.657415  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:52.686271  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:52.686289  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:55.258225  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:55.268276  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:55.268339  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:55.295458  481598 cri.go:89] found id: ""
	I1216 04:37:55.295471  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.295479  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:55.295484  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:55.295550  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:55.322181  481598 cri.go:89] found id: ""
	I1216 04:37:55.322195  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.322202  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:55.322207  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:55.322315  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:55.347301  481598 cri.go:89] found id: ""
	I1216 04:37:55.347316  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.347323  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:55.347329  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:55.347390  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:55.372973  481598 cri.go:89] found id: ""
	I1216 04:37:55.372988  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.372995  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:55.373000  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:55.373057  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:55.398159  481598 cri.go:89] found id: ""
	I1216 04:37:55.398173  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.398179  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:55.398184  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:55.398245  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:55.423108  481598 cri.go:89] found id: ""
	I1216 04:37:55.423122  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.423128  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:55.423133  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:55.423198  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:55.449345  481598 cri.go:89] found id: ""
	I1216 04:37:55.449360  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.449367  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:55.449375  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:55.449397  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:55.514641  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:55.514662  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:55.529353  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:55.529369  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:55.598810  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:55.589643   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.590588   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.591554   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593248   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593891   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:55.589643   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.590588   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.591554   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593248   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593891   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:55.598830  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:55.598842  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:55.666947  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:55.666967  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
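The cycle above repeats roughly every three seconds: probe for a kube-apiserver process, ask the CRI runtime for each expected control-plane container, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal shell sketch of the probe half, using only commands that appear verbatim in this log (the loop structure and echo messages are illustrative, not minikube's actual source):

    # probe for the apiserver process for this profile (as in the log)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # ask CRI-O for each expected control-plane container
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      [ -z "${ids}" ] && echo "no container matching \"${name}\""
    done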
	I1216 04:37:58.197584  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:58.208946  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:58.209018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:58.234805  481598 cri.go:89] found id: ""
	I1216 04:37:58.234819  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.234826  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:58.234831  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:58.234886  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:58.259158  481598 cri.go:89] found id: ""
	I1216 04:37:58.259171  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.259178  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:58.259183  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:58.259241  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:58.286151  481598 cri.go:89] found id: ""
	I1216 04:37:58.286165  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.286172  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:58.286177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:58.286234  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:58.310737  481598 cri.go:89] found id: ""
	I1216 04:37:58.310750  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.310757  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:58.310762  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:58.310817  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:58.334963  481598 cri.go:89] found id: ""
	I1216 04:37:58.334978  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.334985  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:58.334989  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:58.335054  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:58.363884  481598 cri.go:89] found id: ""
	I1216 04:37:58.363910  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.363918  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:58.363924  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:58.363992  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:58.387948  481598 cri.go:89] found id: ""
	I1216 04:37:58.387961  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.387968  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:58.387977  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:58.387988  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:58.452873  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:58.452892  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:58.468670  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:58.468688  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:58.537376  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:58.528562   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.529202   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.530985   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.531559   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.533122   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:58.528562   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.529202   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.530985   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.531559   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.533122   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:58.537385  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:58.537396  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:58.606317  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:58.606339  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:01.135427  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:01.146890  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:01.146955  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:01.174260  481598 cri.go:89] found id: ""
	I1216 04:38:01.174275  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.174282  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:01.174287  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:01.174347  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:01.199944  481598 cri.go:89] found id: ""
	I1216 04:38:01.199958  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.199965  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:01.199970  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:01.200033  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:01.228798  481598 cri.go:89] found id: ""
	I1216 04:38:01.228814  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.228820  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:01.228825  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:01.228884  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:01.255775  481598 cri.go:89] found id: ""
	I1216 04:38:01.255789  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.255796  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:01.255801  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:01.255860  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:01.281657  481598 cri.go:89] found id: ""
	I1216 04:38:01.281671  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.281678  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:01.281683  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:01.281742  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:01.307766  481598 cri.go:89] found id: ""
	I1216 04:38:01.307779  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.307786  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:01.307791  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:01.307851  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:01.333581  481598 cri.go:89] found id: ""
	I1216 04:38:01.333595  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.333602  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:01.333610  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:01.333621  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:01.399337  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:01.399356  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:01.414266  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:01.414283  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:01.482637  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:01.474533   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.475363   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.476875   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.477409   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.478874   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:01.474533   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.475363   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.476875   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.477409   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.478874   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:01.482650  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:01.482662  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:01.550883  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:01.550905  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:04.081199  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:04.093060  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:04.093177  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:04.125499  481598 cri.go:89] found id: ""
	I1216 04:38:04.125513  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.125521  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:04.125526  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:04.125595  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:04.151973  481598 cri.go:89] found id: ""
	I1216 04:38:04.151987  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.151994  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:04.151999  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:04.152058  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:04.180246  481598 cri.go:89] found id: ""
	I1216 04:38:04.180260  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.180266  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:04.180271  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:04.180328  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:04.207652  481598 cri.go:89] found id: ""
	I1216 04:38:04.207665  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.207672  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:04.207678  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:04.207735  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:04.233457  481598 cri.go:89] found id: ""
	I1216 04:38:04.233470  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.233477  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:04.233483  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:04.233540  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:04.259854  481598 cri.go:89] found id: ""
	I1216 04:38:04.259868  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.259875  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:04.259880  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:04.259941  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:04.285804  481598 cri.go:89] found id: ""
	I1216 04:38:04.285818  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.285825  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:04.285832  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:04.285843  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:04.364313  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:04.364343  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:04.397537  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:04.397559  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:04.466334  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:04.466358  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:04.481695  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:04.481712  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:04.549601  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:04.541286   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.542136   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.543652   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.544110   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.545613   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:04.541286   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.542136   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.543652   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.544110   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.545613   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
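Every describe-nodes attempt fails identically: kubectl cannot reach the apiserver on localhost:8441, which is consistent with crictl finding no kube-apiserver container at all. A hedged way to confirm nothing is listening (port 8441 is taken from the errors above; /healthz is the standard apiserver health endpoint and is likewise expected to be refused here):

    # check whether anything is bound to the apiserver port from the log
    sudo ss -ltnp | grep ':8441' || echo "nothing listening on :8441"
    # a direct health probe; with no apiserver this fails with connection refused
    curl -sk https://localhost:8441/healthz || echo "connection refused, as in the log"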
	I1216 04:38:07.049858  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:07.060224  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:07.060286  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:07.095538  481598 cri.go:89] found id: ""
	I1216 04:38:07.095552  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.095558  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:07.095572  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:07.095630  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:07.134098  481598 cri.go:89] found id: ""
	I1216 04:38:07.134113  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.134120  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:07.134125  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:07.134181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:07.160282  481598 cri.go:89] found id: ""
	I1216 04:38:07.160296  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.160312  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:07.160317  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:07.160375  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:07.186194  481598 cri.go:89] found id: ""
	I1216 04:38:07.186208  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.186215  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:07.186220  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:07.186277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:07.211185  481598 cri.go:89] found id: ""
	I1216 04:38:07.211198  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.211211  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:07.211216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:07.211274  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:07.236131  481598 cri.go:89] found id: ""
	I1216 04:38:07.236145  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.236171  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:07.236177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:07.236243  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:07.262438  481598 cri.go:89] found id: ""
	I1216 04:38:07.262452  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.262459  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:07.262467  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:07.262477  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:07.331225  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:07.331246  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:07.359219  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:07.359236  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:07.426207  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:07.426225  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:07.441345  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:07.441364  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:07.509422  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:07.501041   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.501780   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503380   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503873   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.505492   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:07.501041   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.501780   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503380   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503873   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.505492   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:10.011147  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:10.023261  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:10.023327  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:10.050971  481598 cri.go:89] found id: ""
	I1216 04:38:10.050986  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.050994  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:10.050999  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:10.051073  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:10.085339  481598 cri.go:89] found id: ""
	I1216 04:38:10.085353  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.085360  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:10.085366  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:10.085434  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:10.124529  481598 cri.go:89] found id: ""
	I1216 04:38:10.124543  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.124551  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:10.124556  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:10.124624  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:10.164418  481598 cri.go:89] found id: ""
	I1216 04:38:10.164434  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.164442  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:10.164448  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:10.164517  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:10.190732  481598 cri.go:89] found id: ""
	I1216 04:38:10.190746  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.190753  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:10.190758  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:10.190815  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:10.216314  481598 cri.go:89] found id: ""
	I1216 04:38:10.216339  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.216346  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:10.216352  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:10.216419  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:10.241726  481598 cri.go:89] found id: ""
	I1216 04:38:10.241747  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.241755  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:10.241768  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:10.241780  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:10.314496  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:10.304987   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.305903   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.306681   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.308501   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.309133   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:10.304987   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.305903   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.306681   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.308501   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.309133   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:10.314506  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:10.314520  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:10.383929  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:10.383952  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:10.414686  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:10.414703  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:10.480296  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:10.480315  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:12.997386  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:13.013029  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:13.013152  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:13.043756  481598 cri.go:89] found id: ""
	I1216 04:38:13.043772  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.043779  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:13.043784  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:13.043841  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:13.078538  481598 cri.go:89] found id: ""
	I1216 04:38:13.078552  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.078559  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:13.078564  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:13.078625  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:13.107509  481598 cri.go:89] found id: ""
	I1216 04:38:13.107523  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.107530  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:13.107535  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:13.107590  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:13.144886  481598 cri.go:89] found id: ""
	I1216 04:38:13.144900  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.144907  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:13.144912  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:13.144967  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:13.172261  481598 cri.go:89] found id: ""
	I1216 04:38:13.172275  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.172282  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:13.172287  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:13.172346  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:13.200255  481598 cri.go:89] found id: ""
	I1216 04:38:13.200270  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.200277  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:13.200282  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:13.200339  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:13.231840  481598 cri.go:89] found id: ""
	I1216 04:38:13.231855  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.231864  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:13.231871  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:13.231882  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:13.305140  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:13.305162  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:13.320119  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:13.320135  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:13.384652  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:13.376630   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.377445   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.378990   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.379381   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.380897   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:13.376630   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.377445   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.378990   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.379381   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.380897   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:13.384662  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:13.384672  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:13.452891  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:13.452913  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:15.986467  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:15.996642  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:15.996705  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:16.023730  481598 cri.go:89] found id: ""
	I1216 04:38:16.023745  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.023752  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:16.023757  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:16.023814  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:16.048187  481598 cri.go:89] found id: ""
	I1216 04:38:16.048202  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.048209  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:16.048214  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:16.048270  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:16.084197  481598 cri.go:89] found id: ""
	I1216 04:38:16.084210  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.084217  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:16.084222  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:16.084279  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:16.114000  481598 cri.go:89] found id: ""
	I1216 04:38:16.114014  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.114021  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:16.114026  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:16.114095  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:16.146003  481598 cri.go:89] found id: ""
	I1216 04:38:16.146016  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.146023  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:16.146028  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:16.146085  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:16.171053  481598 cri.go:89] found id: ""
	I1216 04:38:16.171067  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.171074  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:16.171079  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:16.171146  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:16.195607  481598 cri.go:89] found id: ""
	I1216 04:38:16.195621  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.195629  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:16.195637  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:16.195647  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:16.261510  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:16.261531  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:16.276956  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:16.276972  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:16.337904  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:16.329776   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.330345   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.331568   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.332130   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.333841   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:16.329776   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.330345   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.331568   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.332130   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.333841   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:16.337914  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:16.337925  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:16.407434  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:16.407456  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:18.938513  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:18.948612  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:18.948671  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:18.973989  481598 cri.go:89] found id: ""
	I1216 04:38:18.974004  481598 logs.go:282] 0 containers: []
	W1216 04:38:18.974011  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:18.974016  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:18.974076  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:18.999416  481598 cri.go:89] found id: ""
	I1216 04:38:18.999430  481598 logs.go:282] 0 containers: []
	W1216 04:38:18.999437  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:18.999442  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:18.999499  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:19.036420  481598 cri.go:89] found id: ""
	I1216 04:38:19.036433  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.036440  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:19.036444  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:19.036500  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:19.063584  481598 cri.go:89] found id: ""
	I1216 04:38:19.063600  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.063617  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:19.063623  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:19.063694  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:19.099252  481598 cri.go:89] found id: ""
	I1216 04:38:19.099275  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.099283  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:19.099289  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:19.099363  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:19.126285  481598 cri.go:89] found id: ""
	I1216 04:38:19.126307  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.126315  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:19.126320  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:19.126387  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:19.151707  481598 cri.go:89] found id: ""
	I1216 04:38:19.151722  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.151738  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:19.151746  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:19.151757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:19.216698  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:19.216723  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:19.231764  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:19.231783  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:19.299324  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:19.291049   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.291658   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.293310   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.293836   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.295297   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:19.291049   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.291658   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.293310   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.293836   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.295297   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:19.299334  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:19.299344  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:19.368556  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:19.368580  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
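
Note: the cycle above queries CRI-O once per expected control-plane component with `crictl ps -a --quiet --name=...`, and every query returns no IDs. A minimal sketch for reproducing the same check by hand on the node (assumes SSH access and crictl on the PATH; the component list is copied from the log):

    # For each component, list container IDs in any state.
    # An empty result means the container never started or was removed.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching $name"
      else
        echo "$name: $ids"
      fi
    done
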
	I1216 04:38:21.906105  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:21.916147  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:21.916206  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:21.941307  481598 cri.go:89] found id: ""
	I1216 04:38:21.941321  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.941328  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:21.941333  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:21.941399  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:21.966745  481598 cri.go:89] found id: ""
	I1216 04:38:21.966760  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.966767  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:21.966772  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:21.966831  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:21.996091  481598 cri.go:89] found id: ""
	I1216 04:38:21.996106  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.996113  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:21.996117  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:21.996176  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:22.022731  481598 cri.go:89] found id: ""
	I1216 04:38:22.022746  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.022753  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:22.022758  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:22.022820  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:22.055034  481598 cri.go:89] found id: ""
	I1216 04:38:22.055048  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.055067  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:22.055072  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:22.055136  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:22.106853  481598 cri.go:89] found id: ""
	I1216 04:38:22.106868  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.106875  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:22.106880  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:22.106949  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:22.143371  481598 cri.go:89] found id: ""
	I1216 04:38:22.143385  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.143392  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:22.143399  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:22.143410  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:22.209056  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:22.200890   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.201492   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203157   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203493   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.204997   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:22.200890   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.201492   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203157   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203493   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.204997   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:22.209083  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:22.209096  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:22.276728  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:22.276748  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:22.308467  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:22.308483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:22.373121  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:22.373141  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
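
Note: every `kubectl describe nodes` attempt above fails with `connection refused` on localhost:8441, i.e. nothing is listening on the apiserver port. A quick sketch for confirming that directly on the node (the port comes from the error above; availability of `ss` and the `/livez` endpoint are assumptions):

    # Check whether anything is bound to the apiserver port.
    sudo ss -ltn 'sport = :8441' || true
    # Probe the endpoint; -k skips TLS verification since we only
    # care whether the socket accepts connections at all.
    curl -ksS --max-time 5 https://localhost:8441/livez || echo "apiserver not reachable"
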
	I1216 04:38:24.888068  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:24.898375  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:24.898438  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:24.922926  481598 cri.go:89] found id: ""
	I1216 04:38:24.922940  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.922953  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:24.922958  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:24.923018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:24.948274  481598 cri.go:89] found id: ""
	I1216 04:38:24.948288  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.948296  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:24.948300  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:24.948366  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:24.973866  481598 cri.go:89] found id: ""
	I1216 04:38:24.973880  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.973888  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:24.973893  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:24.973950  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:24.999743  481598 cri.go:89] found id: ""
	I1216 04:38:24.999757  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.999764  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:24.999769  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:24.999827  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:25.030266  481598 cri.go:89] found id: ""
	I1216 04:38:25.030280  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.030298  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:25.030303  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:25.030363  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:25.055976  481598 cri.go:89] found id: ""
	I1216 04:38:25.055991  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.056008  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:25.056014  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:25.056070  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:25.096522  481598 cri.go:89] found id: ""
	I1216 04:38:25.096537  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.096553  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:25.096568  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:25.096580  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:25.171632  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:25.162141   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.162937   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.164740   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.165464   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.166973   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:25.162141   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.162937   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.164740   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.165464   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.166973   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:25.171649  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:25.171661  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:25.239309  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:25.239330  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:25.268791  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:25.268807  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:25.345864  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:25.345887  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
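
Note: the kubelet and CRI-O logs are gathered with plain journalctl tails, as shown above. To pull the same 400-line windows manually and skim them for startup failures (the `-n 400` matches the logged commands; the grep pattern is only an example):

    # Last 400 lines of each unit, filtered to likely failures.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20
    sudo journalctl -u crio -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20
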
	I1216 04:38:27.863617  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:27.874797  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:27.874872  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:27.904044  481598 cri.go:89] found id: ""
	I1216 04:38:27.904057  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.904064  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:27.904070  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:27.904135  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:27.930157  481598 cri.go:89] found id: ""
	I1216 04:38:27.930172  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.930179  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:27.930184  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:27.930248  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:27.960176  481598 cri.go:89] found id: ""
	I1216 04:38:27.960203  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.960211  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:27.960216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:27.960287  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:27.986202  481598 cri.go:89] found id: ""
	I1216 04:38:27.986215  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.986222  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:27.986227  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:27.986284  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:28.017804  481598 cri.go:89] found id: ""
	I1216 04:38:28.017818  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.017825  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:28.017830  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:28.017899  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:28.048381  481598 cri.go:89] found id: ""
	I1216 04:38:28.048397  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.048404  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:28.048410  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:28.048469  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:28.089010  481598 cri.go:89] found id: ""
	I1216 04:38:28.089024  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.089032  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:28.089040  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:28.089051  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:28.107163  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:28.107185  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:28.185125  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:28.176718   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.177346   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179024   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179600   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.181158   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:28.176718   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.177346   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179024   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179600   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.181158   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:28.185136  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:28.185146  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:28.253973  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:28.253993  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:28.284589  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:28.284611  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
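
Note: each cycle opens with `pgrep -xnf kube-apiserver.*minikube.*`, so the readiness check is simply "is an apiserver process running yet", retried every few seconds. A hedged sketch of the same wait loop (the 60s timeout is illustrative, not from the log):

    # Poll for a kube-apiserver process for up to 60 seconds.
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        break
      fi
      sleep 3
    done
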
	I1216 04:38:30.850377  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:30.860658  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:30.860717  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:30.885504  481598 cri.go:89] found id: ""
	I1216 04:38:30.885519  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.885526  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:30.885531  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:30.885592  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:30.910273  481598 cri.go:89] found id: ""
	I1216 04:38:30.910287  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.910294  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:30.910299  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:30.910360  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:30.935120  481598 cri.go:89] found id: ""
	I1216 04:38:30.935134  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.935140  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:30.935145  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:30.935200  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:30.960866  481598 cri.go:89] found id: ""
	I1216 04:38:30.960879  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.960886  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:30.960891  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:30.960947  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:30.986279  481598 cri.go:89] found id: ""
	I1216 04:38:30.986294  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.986302  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:30.986306  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:30.986367  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:31.014463  481598 cri.go:89] found id: ""
	I1216 04:38:31.014486  481598 logs.go:282] 0 containers: []
	W1216 04:38:31.014493  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:31.014499  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:31.014561  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:31.041177  481598 cri.go:89] found id: ""
	I1216 04:38:31.041198  481598 logs.go:282] 0 containers: []
	W1216 04:38:31.041205  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:31.041213  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:31.041248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:31.083930  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:31.083946  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:31.155612  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:31.155632  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:31.171599  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:31.171616  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:31.238570  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:31.230375   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.231355   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.232487   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.233079   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.234687   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:31.230375   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.231355   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.232487   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.233079   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.234687   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:31.238580  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:31.238590  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
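
Note: the "container status" step uses a fallback chain: prefer crictl, fall back to docker if crictl is absent or fails. The same pattern spelled out (commands as in the log; nothing new assumed):

    # `which crictl || echo crictl` keeps the command word non-empty so
    # the shell still attempts it; `|| sudo docker ps -a` covers
    # docker-based nodes where crictl is unavailable.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
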
	I1216 04:38:33.806752  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:33.816682  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:33.816748  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:33.841422  481598 cri.go:89] found id: ""
	I1216 04:38:33.841437  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.841444  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:33.841449  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:33.841508  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:33.866870  481598 cri.go:89] found id: ""
	I1216 04:38:33.866884  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.866891  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:33.866896  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:33.866954  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:33.892338  481598 cri.go:89] found id: ""
	I1216 04:38:33.892352  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.892360  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:33.892365  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:33.892428  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:33.920004  481598 cri.go:89] found id: ""
	I1216 04:38:33.920018  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.920025  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:33.920030  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:33.920088  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:33.950159  481598 cri.go:89] found id: ""
	I1216 04:38:33.950173  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.950180  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:33.950185  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:33.950244  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:33.976065  481598 cri.go:89] found id: ""
	I1216 04:38:33.976079  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.976086  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:33.976092  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:33.976172  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:34.001694  481598 cri.go:89] found id: ""
	I1216 04:38:34.001710  481598 logs.go:282] 0 containers: []
	W1216 04:38:34.001721  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:34.001729  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:34.001741  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:34.041633  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:34.041651  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:34.108611  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:34.108630  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:34.125509  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:34.125525  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:34.196710  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:34.188193   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.189247   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191038   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191344   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.192807   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:34.188193   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.189247   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191038   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191344   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.192807   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:34.196735  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:34.196746  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
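
Note: the dmesg step keeps only kernel messages at warn level and above before taking the tail. Roughly the same filter without the pager/color flags used above (a sketch; `--level` support depends on the util-linux version):

    # Kernel warnings and worse, last 400 lines.
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
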
	I1216 04:38:36.764814  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:36.774892  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:36.774950  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:36.800624  481598 cri.go:89] found id: ""
	I1216 04:38:36.800640  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.800647  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:36.800652  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:36.800715  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:36.826259  481598 cri.go:89] found id: ""
	I1216 04:38:36.826274  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.826281  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:36.826286  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:36.826343  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:36.852246  481598 cri.go:89] found id: ""
	I1216 04:38:36.852269  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.852277  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:36.852282  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:36.852351  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:36.877659  481598 cri.go:89] found id: ""
	I1216 04:38:36.877680  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.877688  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:36.877693  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:36.877752  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:36.903365  481598 cri.go:89] found id: ""
	I1216 04:38:36.903379  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.903385  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:36.903390  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:36.903446  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:36.928313  481598 cri.go:89] found id: ""
	I1216 04:38:36.928328  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.928335  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:36.928341  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:36.928399  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:36.953145  481598 cri.go:89] found id: ""
	I1216 04:38:36.953158  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.953165  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:36.953172  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:36.953182  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:37.018934  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:37.018956  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:37.036483  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:37.036500  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:37.114492  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:37.106457   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.106872   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108430   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108750   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.110247   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:37.106457   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.106872   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108430   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108750   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.110247   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:37.114503  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:37.114514  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:37.191646  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:37.191667  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
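
Note: the describe-nodes command runs the node-local versioned kubectl binary against /var/lib/minikube/kubeconfig, so the failure is independent of the host's kubeconfig. To see which server that file points at (path taken from the log; the grep is just one way to read it):

    # The server field explains why connections go to localhost:8441.
    sudo grep -E 'server:' /var/lib/minikube/kubeconfig
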
	I1216 04:38:39.722033  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:39.731793  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:39.731852  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:39.756808  481598 cri.go:89] found id: ""
	I1216 04:38:39.756822  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.756829  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:39.756834  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:39.756891  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:39.782419  481598 cri.go:89] found id: ""
	I1216 04:38:39.782440  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.782448  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:39.782453  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:39.782510  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:39.807545  481598 cri.go:89] found id: ""
	I1216 04:38:39.807559  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.807576  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:39.807581  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:39.807639  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:39.836801  481598 cri.go:89] found id: ""
	I1216 04:38:39.836816  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.836832  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:39.836844  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:39.836914  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:39.861851  481598 cri.go:89] found id: ""
	I1216 04:38:39.861865  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.861872  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:39.861877  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:39.861935  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:39.891116  481598 cri.go:89] found id: ""
	I1216 04:38:39.891130  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.891137  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:39.891144  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:39.891200  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:39.917011  481598 cri.go:89] found id: ""
	I1216 04:38:39.917026  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.917032  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:39.917040  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:39.917050  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:39.983103  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:39.983124  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:39.997812  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:39.997829  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:40.072880  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:40.062419   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.063322   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066458   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066896   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.068451   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:40.062419   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.063322   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066458   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066896   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.068451   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:40.072890  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:40.072902  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:40.155262  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:40.155284  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:42.686177  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:42.696709  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:42.696766  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:42.723669  481598 cri.go:89] found id: ""
	I1216 04:38:42.723684  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.723691  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:42.723697  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:42.723762  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:42.751573  481598 cri.go:89] found id: ""
	I1216 04:38:42.751587  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.751594  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:42.751599  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:42.751660  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:42.777155  481598 cri.go:89] found id: ""
	I1216 04:38:42.777170  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.777177  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:42.777182  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:42.777253  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:42.802762  481598 cri.go:89] found id: ""
	I1216 04:38:42.802776  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.802783  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:42.802788  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:42.802847  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:42.828278  481598 cri.go:89] found id: ""
	I1216 04:38:42.828291  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.828299  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:42.828303  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:42.828361  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:42.854186  481598 cri.go:89] found id: ""
	I1216 04:38:42.854211  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.854219  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:42.854224  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:42.854281  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:42.879809  481598 cri.go:89] found id: ""
	I1216 04:38:42.879822  481598 logs.go:282] 0 containers: []
	W1216 04:38:42.879831  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:42.879839  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:42.879851  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:42.945305  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:42.935474   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.936560   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.937507   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939318   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939948   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:42.935474   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.936560   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.937507   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939318   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:42.939948   14399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:42.945315  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:42.945326  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:43.019176  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:43.019199  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:43.048232  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:43.048248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:43.128355  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:43.128376  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:45.644135  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:45.654691  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:45.654750  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:45.682129  481598 cri.go:89] found id: ""
	I1216 04:38:45.682143  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.682151  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:45.682156  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:45.682216  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:45.706956  481598 cri.go:89] found id: ""
	I1216 04:38:45.706970  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.706977  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:45.706981  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:45.707040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:45.732479  481598 cri.go:89] found id: ""
	I1216 04:38:45.732493  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.732500  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:45.732505  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:45.732563  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:45.757526  481598 cri.go:89] found id: ""
	I1216 04:38:45.757540  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.757547  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:45.757553  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:45.757610  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:45.787392  481598 cri.go:89] found id: ""
	I1216 04:38:45.787407  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.787414  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:45.787419  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:45.787481  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:45.817452  481598 cri.go:89] found id: ""
	I1216 04:38:45.817477  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.817484  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:45.817490  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:45.817549  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:45.843705  481598 cri.go:89] found id: ""
	I1216 04:38:45.843732  481598 logs.go:282] 0 containers: []
	W1216 04:38:45.843744  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:45.843752  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:45.843762  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:45.909394  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:45.909415  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:45.924650  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:45.924667  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:45.985242  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:45.976918   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.977461   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.978500   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980038   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980480   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:45.976918   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.977461   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.978500   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980038   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:45.980480   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:45.985251  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:45.985262  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:46.060306  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:46.060333  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
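	(The block above is one iteration of minikube's apiserver wait loop: a pgrep for a running kube-apiserver process, then a crictl query per control-plane container, all of which come back empty. A minimal standalone sketch of that probe follows; the commands are copied from the log, but the program around them is an assumption, not minikube's actual implementation, and it presumes sudo, pgrep, and crictl exist on the host.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
		// pgrep exits non-zero when no process matches.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
			fmt.Println("no kube-apiserver process found")
		}

		// Equivalent of: sudo crictl ps -a --quiet --name=<component>
		// --quiet prints only container IDs, so empty output means no container.
		for _, name := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if strings.TrimSpace(string(out)) == "" {
				fmt.Printf("no container found matching %q\n", name)
			}
		}
	}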
	I1216 04:38:48.603994  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:48.614177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:48.614238  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:48.639598  481598 cri.go:89] found id: ""
	I1216 04:38:48.639612  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.639620  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:48.639625  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:48.639685  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:48.668445  481598 cri.go:89] found id: ""
	I1216 04:38:48.668458  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.668465  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:48.668470  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:48.668525  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:48.698321  481598 cri.go:89] found id: ""
	I1216 04:38:48.698336  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.698343  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:48.698348  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:48.698410  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:48.724272  481598 cri.go:89] found id: ""
	I1216 04:38:48.724286  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.724293  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:48.724298  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:48.724367  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:48.748881  481598 cri.go:89] found id: ""
	I1216 04:38:48.748895  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.748902  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:48.748907  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:48.748965  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:48.773436  481598 cri.go:89] found id: ""
	I1216 04:38:48.773450  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.773456  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:48.773462  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:48.773518  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:48.798866  481598 cri.go:89] found id: ""
	I1216 04:38:48.798880  481598 logs.go:282] 0 containers: []
	W1216 04:38:48.798887  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:48.798894  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:48.798904  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:48.830890  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:48.830906  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:48.897158  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:48.897179  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:48.912309  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:48.912326  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:48.979282  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:48.970954   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.971966   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.972659   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974127   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974422   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:48.970954   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.971966   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.972659   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974127   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:48.974422   14626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:48.979293  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:48.979304  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:51.548916  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:51.559621  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:51.559691  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:51.585186  481598 cri.go:89] found id: ""
	I1216 04:38:51.585201  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.585208  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:51.585214  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:51.585281  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:51.610438  481598 cri.go:89] found id: ""
	I1216 04:38:51.610454  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.610462  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:51.610466  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:51.610523  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:51.640579  481598 cri.go:89] found id: ""
	I1216 04:38:51.640594  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.640601  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:51.640607  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:51.640665  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:51.667755  481598 cri.go:89] found id: ""
	I1216 04:38:51.667770  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.667778  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:51.667783  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:51.667840  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:51.697652  481598 cri.go:89] found id: ""
	I1216 04:38:51.697666  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.697673  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:51.697678  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:51.697738  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:51.723173  481598 cri.go:89] found id: ""
	I1216 04:38:51.723188  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.723195  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:51.723200  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:51.723266  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:51.748836  481598 cri.go:89] found id: ""
	I1216 04:38:51.748851  481598 logs.go:282] 0 containers: []
	W1216 04:38:51.748858  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:51.748865  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:51.748876  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:51.790045  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:51.790061  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:51.857688  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:51.857707  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:51.872771  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:51.872788  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:51.934401  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:51.926211   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.926966   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.928583   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.929148   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.930598   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:51.926211   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.926966   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.928583   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.929148   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:51.930598   14729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:51.934410  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:51.934420  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:54.502288  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:54.513093  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:54.513158  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:54.540102  481598 cri.go:89] found id: ""
	I1216 04:38:54.540116  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.540124  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:54.540129  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:54.540187  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:54.565581  481598 cri.go:89] found id: ""
	I1216 04:38:54.565597  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.565605  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:54.565609  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:54.565673  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:54.594141  481598 cri.go:89] found id: ""
	I1216 04:38:54.594155  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.594163  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:54.594167  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:54.594229  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:54.620437  481598 cri.go:89] found id: ""
	I1216 04:38:54.620451  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.620459  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:54.620464  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:54.620521  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:54.651777  481598 cri.go:89] found id: ""
	I1216 04:38:54.651792  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.651800  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:54.651805  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:54.651862  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:54.677522  481598 cri.go:89] found id: ""
	I1216 04:38:54.677536  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.677544  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:54.677549  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:54.677608  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:54.702758  481598 cri.go:89] found id: ""
	I1216 04:38:54.702774  481598 logs.go:282] 0 containers: []
	W1216 04:38:54.702782  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:54.702789  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:54.702800  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:54.731468  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:54.731485  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:54.801713  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:54.801732  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:54.816784  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:54.816800  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:54.890418  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:54.882935   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.883563   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.884583   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.885055   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.886522   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:54.882935   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.883563   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.884583   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.885055   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:54.886522   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:54.890428  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:54.890439  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
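	(Every describe-nodes attempt above fails the same way: kubectl cannot reach https://localhost:8441 because the connect is refused, meaning nothing is listening on the apiserver port at all. A plain TCP dial reproduces that exact symptom without involving kubectl; this is an illustrative sketch, not part of the test suite.)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" from this dial matches the kubectl errors in
		// the log: no listener on the port, i.e. kube-apiserver never started.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8441")
	}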
	I1216 04:38:57.462843  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:57.473005  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:57.473096  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:57.498656  481598 cri.go:89] found id: ""
	I1216 04:38:57.498670  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.498676  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:57.498682  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:57.498740  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:57.524589  481598 cri.go:89] found id: ""
	I1216 04:38:57.524604  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.524611  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:57.524616  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:57.524683  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:57.549819  481598 cri.go:89] found id: ""
	I1216 04:38:57.549833  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.549844  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:57.549849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:57.549906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:57.580220  481598 cri.go:89] found id: ""
	I1216 04:38:57.580234  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.580241  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:57.580246  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:57.580303  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:57.605587  481598 cri.go:89] found id: ""
	I1216 04:38:57.605600  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.605607  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:57.605612  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:57.605668  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:57.630691  481598 cri.go:89] found id: ""
	I1216 04:38:57.630706  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.630721  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:57.630726  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:57.630784  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:57.655557  481598 cri.go:89] found id: ""
	I1216 04:38:57.655571  481598 logs.go:282] 0 containers: []
	W1216 04:38:57.655579  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:57.655588  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:57.655598  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:57.686872  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:57.686888  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:57.752402  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:57.752422  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:57.767423  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:57.767439  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:57.831611  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:57.823549   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.824364   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826016   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826308   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.827809   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:57.823549   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.824364   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826016   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.826308   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:57.827809   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:57.831621  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:57.831631  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:00.403298  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:00.416780  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:00.416848  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:00.445602  481598 cri.go:89] found id: ""
	I1216 04:39:00.445618  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.445626  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:00.445632  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:00.445698  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:00.480454  481598 cri.go:89] found id: ""
	I1216 04:39:00.480470  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.480478  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:00.480483  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:00.480548  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:00.509654  481598 cri.go:89] found id: ""
	I1216 04:39:00.509669  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.509677  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:00.509682  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:00.509746  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:00.539666  481598 cri.go:89] found id: ""
	I1216 04:39:00.539681  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.539688  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:00.539694  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:00.539755  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:00.567301  481598 cri.go:89] found id: ""
	I1216 04:39:00.567316  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.567323  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:00.567328  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:00.567388  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:00.593431  481598 cri.go:89] found id: ""
	I1216 04:39:00.593446  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.593453  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:00.593458  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:00.593526  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:00.618713  481598 cri.go:89] found id: ""
	I1216 04:39:00.618728  481598 logs.go:282] 0 containers: []
	W1216 04:39:00.618736  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:00.618743  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:00.618754  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:00.687858  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:00.678533   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.679159   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.681526   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.682358   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.683768   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:00.678533   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.679159   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.681526   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.682358   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:00.683768   15027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:00.687869  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:00.687880  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:00.757046  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:00.757071  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:00.784949  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:00.784966  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:00.850312  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:00.850331  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
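	(The timestamps show the probe/gather cycle repeating roughly every three seconds: 04:38:42, :45, :48, :51, :54, :57, 04:39:00. That is the shape of a poll-until-deadline loop; a generic sketch is below. The 3s interval is read off the timestamps and the deadline is an assumption, neither is taken from minikube's source.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the pgrep probe from the log.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			time.Sleep(3 * time.Second) // cadence observed in the log
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}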
	I1216 04:39:03.365582  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:03.376104  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:03.376164  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:03.402517  481598 cri.go:89] found id: ""
	I1216 04:39:03.402532  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.402539  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:03.402544  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:03.402605  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:03.428281  481598 cri.go:89] found id: ""
	I1216 04:39:03.428295  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.428302  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:03.428308  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:03.428365  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:03.456250  481598 cri.go:89] found id: ""
	I1216 04:39:03.456267  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.456274  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:03.456280  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:03.456353  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:03.482051  481598 cri.go:89] found id: ""
	I1216 04:39:03.482064  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.482071  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:03.482077  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:03.482137  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:03.511578  481598 cri.go:89] found id: ""
	I1216 04:39:03.511594  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.511601  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:03.511606  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:03.511664  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:03.540839  481598 cri.go:89] found id: ""
	I1216 04:39:03.540853  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.540860  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:03.540866  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:03.540921  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:03.567087  481598 cri.go:89] found id: ""
	I1216 04:39:03.567103  481598 logs.go:282] 0 containers: []
	W1216 04:39:03.567111  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:03.567119  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:03.567131  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:03.633316  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:03.633338  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:03.648697  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:03.648714  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:03.714118  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:03.704846   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.705914   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.707686   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.708281   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.710068   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:03.704846   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.705914   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.707686   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.708281   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:03.710068   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:03.714128  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:03.714140  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:03.784197  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:03.784219  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:06.317384  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:06.328685  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:06.328743  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:06.355872  481598 cri.go:89] found id: ""
	I1216 04:39:06.355887  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.355893  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:06.355907  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:06.355964  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:06.386605  481598 cri.go:89] found id: ""
	I1216 04:39:06.386619  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.386626  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:06.386631  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:06.386696  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:06.412102  481598 cri.go:89] found id: ""
	I1216 04:39:06.412117  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.412132  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:06.412137  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:06.412209  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:06.437654  481598 cri.go:89] found id: ""
	I1216 04:39:06.437669  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.437676  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:06.437681  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:06.437752  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:06.466130  481598 cri.go:89] found id: ""
	I1216 04:39:06.466145  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.466151  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:06.466156  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:06.466219  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:06.491149  481598 cri.go:89] found id: ""
	I1216 04:39:06.491163  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.491170  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:06.491176  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:06.491236  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:06.517113  481598 cri.go:89] found id: ""
	I1216 04:39:06.517127  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.517134  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:06.517141  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:06.517165  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:06.532219  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:06.532236  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:06.610459  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:06.601795   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603005   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603804   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605522   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605849   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:06.601795   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603005   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603804   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605522   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605849   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:06.610469  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:06.610480  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:06.678489  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:06.678509  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:06.713694  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:06.713710  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
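	(Between probes, each cycle shells out through /bin/bash to collect kubelet, dmesg, CRI-O, and container-status output. The sketch below runs those same commands and captures their output; the command strings are copied verbatim from the log, while the surrounding program structure is assumed.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one collection command via /bin/bash -c, as the log does.
	func gather(label, cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", label, err, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("CRI-O", "sudo journalctl -u crio -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}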
	I1216 04:39:09.281978  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:09.291972  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:09.292040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:09.318986  481598 cri.go:89] found id: ""
	I1216 04:39:09.319002  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.319009  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:09.319014  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:09.319080  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:09.355810  481598 cri.go:89] found id: ""
	I1216 04:39:09.355823  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.355848  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:09.355853  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:09.355917  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:09.386910  481598 cri.go:89] found id: ""
	I1216 04:39:09.386939  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.386946  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:09.386951  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:09.387019  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:09.415820  481598 cri.go:89] found id: ""
	I1216 04:39:09.415834  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.415841  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:09.415846  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:09.415902  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:09.441866  481598 cri.go:89] found id: ""
	I1216 04:39:09.441881  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.441888  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:09.441892  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:09.441956  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:09.467703  481598 cri.go:89] found id: ""
	I1216 04:39:09.467718  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.467724  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:09.467730  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:09.467790  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:09.494307  481598 cri.go:89] found id: ""
	I1216 04:39:09.494322  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.494329  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:09.494336  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:09.494346  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:09.521531  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:09.521549  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:09.587441  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:09.587464  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:09.602275  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:09.602291  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:09.664727  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:09.657029   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.657494   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659008   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659326   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.660782   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:39:09.664737  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:09.664748  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:12.233947  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:12.245865  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:12.245923  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:12.270410  481598 cri.go:89] found id: ""
	I1216 04:39:12.270425  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.270431  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:12.270437  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:12.270513  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:12.295309  481598 cri.go:89] found id: ""
	I1216 04:39:12.295323  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.295330  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:12.295334  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:12.295391  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:12.326327  481598 cri.go:89] found id: ""
	I1216 04:39:12.326342  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.326349  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:12.326354  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:12.326415  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:12.358181  481598 cri.go:89] found id: ""
	I1216 04:39:12.358196  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.358203  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:12.358208  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:12.358309  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:12.390281  481598 cri.go:89] found id: ""
	I1216 04:39:12.390296  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.390303  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:12.390308  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:12.390365  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:12.419429  481598 cri.go:89] found id: ""
	I1216 04:39:12.419444  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.419451  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:12.419456  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:12.419512  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:12.445137  481598 cri.go:89] found id: ""
	I1216 04:39:12.445151  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.445159  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:12.445167  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:12.445177  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:12.510786  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:12.510805  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:12.525785  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:12.525801  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:12.590602  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:12.581842   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.582992   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.584571   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.585097   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.586642   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:39:12.590616  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:12.590627  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:12.664304  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:12.664331  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:15.192618  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:15.202786  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:15.202855  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:15.227787  481598 cri.go:89] found id: ""
	I1216 04:39:15.227801  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.227808  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:15.227813  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:15.227875  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:15.254490  481598 cri.go:89] found id: ""
	I1216 04:39:15.254505  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.254512  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:15.254517  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:15.254578  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:15.280037  481598 cri.go:89] found id: ""
	I1216 04:39:15.280052  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.280060  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:15.280064  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:15.280124  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:15.306278  481598 cri.go:89] found id: ""
	I1216 04:39:15.306295  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.306303  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:15.306308  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:15.306368  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:15.338132  481598 cri.go:89] found id: ""
	I1216 04:39:15.338146  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.338152  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:15.338157  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:15.338215  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:15.365557  481598 cri.go:89] found id: ""
	I1216 04:39:15.365571  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.365578  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:15.365583  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:15.365640  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:15.394440  481598 cri.go:89] found id: ""
	I1216 04:39:15.394454  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.394461  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:15.394469  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:15.394478  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:15.460219  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:15.460240  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:15.475344  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:15.475362  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:15.543524  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:15.535805   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.536549   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538069   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538584   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.539605   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:39:15.543542  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:15.543552  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:15.611736  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:15.611757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:18.147208  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:18.157570  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:18.157629  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:18.182325  481598 cri.go:89] found id: ""
	I1216 04:39:18.182339  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.182346  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:18.182351  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:18.182409  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:18.211344  481598 cri.go:89] found id: ""
	I1216 04:39:18.211358  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.211365  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:18.211370  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:18.211430  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:18.236501  481598 cri.go:89] found id: ""
	I1216 04:39:18.236518  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.236525  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:18.236533  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:18.236600  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:18.261000  481598 cri.go:89] found id: ""
	I1216 04:39:18.261013  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.261020  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:18.261025  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:18.261112  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:18.286887  481598 cri.go:89] found id: ""
	I1216 04:39:18.286901  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.286908  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:18.286913  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:18.286970  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:18.311492  481598 cri.go:89] found id: ""
	I1216 04:39:18.311506  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.311514  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:18.311519  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:18.311577  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:18.350624  481598 cri.go:89] found id: ""
	I1216 04:39:18.350638  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.350645  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:18.350652  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:18.350663  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:18.424437  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:18.424461  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:18.439409  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:18.439425  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:18.503408  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:18.495202   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.495809   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.497576   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.498108   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.499707   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:39:18.503426  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:18.503439  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:18.572236  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:18.572256  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:21.099923  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:21.109895  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:21.109959  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:21.135095  481598 cri.go:89] found id: ""
	I1216 04:39:21.135110  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.135117  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:21.135122  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:21.135188  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:21.159978  481598 cri.go:89] found id: ""
	I1216 04:39:21.159991  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.159998  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:21.160002  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:21.160060  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:21.184861  481598 cri.go:89] found id: ""
	I1216 04:39:21.184875  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.184882  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:21.184887  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:21.184943  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:21.215362  481598 cri.go:89] found id: ""
	I1216 04:39:21.215376  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.215383  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:21.215388  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:21.215451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:21.241352  481598 cri.go:89] found id: ""
	I1216 04:39:21.241366  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.241373  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:21.241378  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:21.241435  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:21.270124  481598 cri.go:89] found id: ""
	I1216 04:39:21.270139  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.270146  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:21.270151  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:21.270210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:21.294836  481598 cri.go:89] found id: ""
	I1216 04:39:21.294850  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.294857  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:21.294865  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:21.294876  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:21.340249  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:21.340265  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:21.415950  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:21.415975  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:21.431603  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:21.431619  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:21.496240  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:21.487807   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.488585   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490277   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490834   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.492427   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:39:21.496250  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:21.496260  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:24.064476  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:24.075218  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:24.075282  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:24.100790  481598 cri.go:89] found id: ""
	I1216 04:39:24.100804  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.100810  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:24.100815  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:24.100870  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:24.127285  481598 cri.go:89] found id: ""
	I1216 04:39:24.127301  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.127308  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:24.127312  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:24.127371  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:24.156427  481598 cri.go:89] found id: ""
	I1216 04:39:24.156440  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.156447  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:24.156452  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:24.156513  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:24.182130  481598 cri.go:89] found id: ""
	I1216 04:39:24.182146  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.182154  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:24.182159  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:24.182216  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:24.207363  481598 cri.go:89] found id: ""
	I1216 04:39:24.207378  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.207385  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:24.207390  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:24.207451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:24.235986  481598 cri.go:89] found id: ""
	I1216 04:39:24.236001  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.236017  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:24.236022  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:24.236077  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:24.260561  481598 cri.go:89] found id: ""
	I1216 04:39:24.260582  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.260589  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:24.260597  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:24.260608  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:24.328717  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:24.328738  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:24.362340  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:24.362357  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:24.435463  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:24.435483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:24.452196  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:24.452212  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:24.517484  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:24.509289   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.509913   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511537   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511992   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.513587   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:39:27.018375  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:27.028921  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:27.028982  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:27.058968  481598 cri.go:89] found id: ""
	I1216 04:39:27.058984  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.058991  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:27.058996  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:27.059058  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:27.086788  481598 cri.go:89] found id: ""
	I1216 04:39:27.086802  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.086808  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:27.086815  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:27.086872  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:27.111593  481598 cri.go:89] found id: ""
	I1216 04:39:27.111607  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.111629  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:27.111635  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:27.111700  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:27.135786  481598 cri.go:89] found id: ""
	I1216 04:39:27.135800  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.135816  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:27.135822  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:27.135881  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:27.175564  481598 cri.go:89] found id: ""
	I1216 04:39:27.175577  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.175593  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:27.175598  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:27.175670  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:27.201020  481598 cri.go:89] found id: ""
	I1216 04:39:27.201034  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.201041  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:27.201048  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:27.201123  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:27.226608  481598 cri.go:89] found id: ""
	I1216 04:39:27.226622  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.226629  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:27.226637  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:27.226648  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:27.292121  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:27.292140  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:27.307824  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:27.307840  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:27.382707  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:27.371394   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.372197   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374043   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374339   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.375852   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:39:27.382717  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:27.382728  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:27.450745  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:27.450764  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:29.981824  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:29.991752  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:29.991812  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:30.027720  481598 cri.go:89] found id: ""
	I1216 04:39:30.027737  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.027744  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:30.027749  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:30.027824  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:30.064834  481598 cri.go:89] found id: ""
	I1216 04:39:30.064862  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.064869  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:30.064875  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:30.064942  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:30.092327  481598 cri.go:89] found id: ""
	I1216 04:39:30.092341  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.092349  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:30.092354  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:30.092415  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:30.119568  481598 cri.go:89] found id: ""
	I1216 04:39:30.119583  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.119590  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:30.119595  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:30.119654  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:30.145948  481598 cri.go:89] found id: ""
	I1216 04:39:30.145962  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.145970  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:30.145974  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:30.146037  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:30.174055  481598 cri.go:89] found id: ""
	I1216 04:39:30.174069  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.174077  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:30.174082  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:30.174148  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:30.200676  481598 cri.go:89] found id: ""
	I1216 04:39:30.200704  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.200711  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:30.200719  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:30.200729  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:30.273177  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:30.273199  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:30.307730  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:30.307749  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:30.380128  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:30.380149  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:30.398650  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:30.398668  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:30.464666  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:30.456212   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.456700   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458422   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458756   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.460283   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:39:32.965244  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:32.975770  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:32.975829  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:33.008069  481598 cri.go:89] found id: ""
	I1216 04:39:33.008086  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.008094  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:33.008099  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:33.008180  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:33.035228  481598 cri.go:89] found id: ""
	I1216 04:39:33.035242  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.035249  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:33.035254  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:33.035319  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:33.062504  481598 cri.go:89] found id: ""
	I1216 04:39:33.062518  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.062525  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:33.062530  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:33.062588  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:33.088441  481598 cri.go:89] found id: ""
	I1216 04:39:33.088455  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.088462  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:33.088467  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:33.088529  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:33.119260  481598 cri.go:89] found id: ""
	I1216 04:39:33.119274  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.119281  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:33.119286  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:33.119346  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:33.150552  481598 cri.go:89] found id: ""
	I1216 04:39:33.150567  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.150575  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:33.150580  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:33.150644  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:33.180001  481598 cri.go:89] found id: ""
	I1216 04:39:33.180016  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.180023  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:33.180030  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:33.180040  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:33.248727  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:33.248752  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:33.277683  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:33.277700  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:33.350702  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:33.350721  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:33.369208  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:33.369248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:33.439765  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:33.431154   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.432026   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.433573   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.434049   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.435592   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:33.431154   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.432026   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.433573   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.434049   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.435592   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
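
The block above is one iteration of minikube's control-plane wait loop: it pgreps for a running kube-apiserver, probes the CRI for each expected component with `sudo crictl ps -a --quiet --name=<component>`, and, finding no containers, gathers kubelet, dmesg, CRI-O, container-status, and describe-nodes output before retrying a few seconds later. A minimal Go sketch of that polling pattern (a hypothetical standalone probe run on the node, not minikube's actual code) could look like:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for attempt := 0; attempt < 5; attempt++ { // the real loop runs until a deadline
            for _, name := range components {
                // same probe the log shows: an empty result means no matching container
                out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
                if err != nil || strings.TrimSpace(string(out)) == "" {
                    fmt.Printf("no container was found matching %q\n", name)
                }
            }
            time.Sleep(3 * time.Second)
        }
    }

Run on the node itself (it assumes sudo and crictl are available there), this reproduces the `No container was found matching ...` lines seen in each cycle.
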
	I1216 04:39:35.940031  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:35.950049  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:35.950107  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:35.975196  481598 cri.go:89] found id: ""
	I1216 04:39:35.975209  481598 logs.go:282] 0 containers: []
	W1216 04:39:35.975216  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:35.975221  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:35.975277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:36.001797  481598 cri.go:89] found id: ""
	I1216 04:39:36.001812  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.001820  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:36.001826  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:36.001890  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:36.036431  481598 cri.go:89] found id: ""
	I1216 04:39:36.036446  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.036454  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:36.036459  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:36.036525  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:36.063963  481598 cri.go:89] found id: ""
	I1216 04:39:36.063978  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.063985  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:36.063990  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:36.064048  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:36.090639  481598 cri.go:89] found id: ""
	I1216 04:39:36.090653  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.090660  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:36.090665  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:36.090724  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:36.116793  481598 cri.go:89] found id: ""
	I1216 04:39:36.116807  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.116816  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:36.116821  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:36.116880  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:36.141959  481598 cri.go:89] found id: ""
	I1216 04:39:36.141972  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.141979  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:36.141986  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:36.141996  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:36.208976  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:36.208996  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:36.239530  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:36.239546  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:36.305220  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:36.305245  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:36.322139  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:36.322169  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:36.399936  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:36.391294   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.391711   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.393476   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.394135   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.395762   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:36.391294   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.391711   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.393476   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.394135   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.395762   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:38.900194  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:38.910569  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:38.910632  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:38.936840  481598 cri.go:89] found id: ""
	I1216 04:39:38.936854  481598 logs.go:282] 0 containers: []
	W1216 04:39:38.936861  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:38.936867  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:38.936926  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:38.969994  481598 cri.go:89] found id: ""
	I1216 04:39:38.970008  481598 logs.go:282] 0 containers: []
	W1216 04:39:38.970016  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:38.970021  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:38.970092  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:39.000246  481598 cri.go:89] found id: ""
	I1216 04:39:39.000260  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.000267  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:39.000272  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:39.000328  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:39.028053  481598 cri.go:89] found id: ""
	I1216 04:39:39.028068  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.028075  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:39.028080  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:39.028139  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:39.053044  481598 cri.go:89] found id: ""
	I1216 04:39:39.053058  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.053100  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:39.053107  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:39.053165  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:39.078212  481598 cri.go:89] found id: ""
	I1216 04:39:39.078226  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.078234  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:39.078239  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:39.078296  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:39.103968  481598 cri.go:89] found id: ""
	I1216 04:39:39.103982  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.103994  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:39.104001  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:39.104011  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:39.171261  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:39.171283  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:39.203918  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:39.203937  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:39.269162  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:39.269183  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:39.283640  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:39.283658  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:39.357490  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:39.349083   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.349811   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351336   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351851   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.353466   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:39.349083   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.349811   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351336   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351851   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.353466   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
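
Every `kubectl describe nodes` attempt fails identically because nothing is listening on the apiserver port: the `connection refused` in the stderr comes from the TCP dial to [::1]:8441, before any Kubernetes-level request is made. The port state can be checked directly, sketched here in Go under the assumption it runs on the node:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 8441 is the apiserver port shown in this log; adjust for another cluster
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // this is the same failure kubectl surfaces as "connection refused"
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8441")
    }
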
	I1216 04:39:41.857783  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:41.868156  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:41.868218  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:41.896097  481598 cri.go:89] found id: ""
	I1216 04:39:41.896111  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.896118  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:41.896123  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:41.896183  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:41.923730  481598 cri.go:89] found id: ""
	I1216 04:39:41.923745  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.923752  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:41.923758  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:41.923814  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:41.948996  481598 cri.go:89] found id: ""
	I1216 04:39:41.949010  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.949017  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:41.949022  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:41.949098  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:41.973820  481598 cri.go:89] found id: ""
	I1216 04:39:41.973834  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.973841  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:41.973845  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:41.973901  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:41.999809  481598 cri.go:89] found id: ""
	I1216 04:39:41.999832  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.999839  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:41.999845  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:41.999910  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:42.032190  481598 cri.go:89] found id: ""
	I1216 04:39:42.032216  481598 logs.go:282] 0 containers: []
	W1216 04:39:42.032224  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:42.032229  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:42.032301  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:42.059655  481598 cri.go:89] found id: ""
	I1216 04:39:42.059679  481598 logs.go:282] 0 containers: []
	W1216 04:39:42.059687  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:42.059694  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:42.059705  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:42.127853  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:42.127875  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:42.146370  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:42.146393  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:42.223415  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:42.212968   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.213792   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.215670   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.216278   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.218024   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:42.212968   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.213792   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.215670   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.216278   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.218024   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:42.223444  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:42.223457  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:42.304338  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:42.304368  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:44.847911  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:44.858741  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:44.858820  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:44.884095  481598 cri.go:89] found id: ""
	I1216 04:39:44.884110  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.884118  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:44.884122  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:44.884181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:44.911877  481598 cri.go:89] found id: ""
	I1216 04:39:44.911891  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.911898  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:44.911902  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:44.911960  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:44.938117  481598 cri.go:89] found id: ""
	I1216 04:39:44.938132  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.938139  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:44.938144  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:44.938204  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:44.972779  481598 cri.go:89] found id: ""
	I1216 04:39:44.972793  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.972800  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:44.972805  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:44.972862  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:45.000033  481598 cri.go:89] found id: ""
	I1216 04:39:45.000047  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.000054  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:45.000060  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:45.000121  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:45.072214  481598 cri.go:89] found id: ""
	I1216 04:39:45.072234  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.072244  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:45.072250  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:45.072325  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:45.112612  481598 cri.go:89] found id: ""
	I1216 04:39:45.112632  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.112641  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:45.112653  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:45.112668  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:45.193381  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:45.193407  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:45.244205  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:45.244225  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:45.324983  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:45.325004  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:45.340857  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:45.340880  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:45.423270  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:45.414685   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.415307   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.416945   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.417545   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.419306   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:45.414685   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.415307   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.416945   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.417545   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.419306   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:47.923526  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:47.933779  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:47.933853  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:47.960777  481598 cri.go:89] found id: ""
	I1216 04:39:47.960793  481598 logs.go:282] 0 containers: []
	W1216 04:39:47.960800  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:47.960804  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:47.960863  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:47.990010  481598 cri.go:89] found id: ""
	I1216 04:39:47.990024  481598 logs.go:282] 0 containers: []
	W1216 04:39:47.990031  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:47.990036  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:47.990094  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:48.021881  481598 cri.go:89] found id: ""
	I1216 04:39:48.021897  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.021908  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:48.021914  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:48.021978  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:48.048841  481598 cri.go:89] found id: ""
	I1216 04:39:48.048860  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.048867  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:48.048872  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:48.048947  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:48.074988  481598 cri.go:89] found id: ""
	I1216 04:39:48.075002  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.075010  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:48.075015  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:48.075073  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:48.101288  481598 cri.go:89] found id: ""
	I1216 04:39:48.101303  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.101320  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:48.101325  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:48.101383  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:48.126469  481598 cri.go:89] found id: ""
	I1216 04:39:48.126483  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.126489  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:48.126497  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:48.126508  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:48.160206  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:48.160222  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:48.226864  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:48.226883  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:48.241861  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:48.241879  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:48.311183  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:48.302762   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.303348   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.304889   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.305401   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.306868   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:48.302762   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.303348   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.304889   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.305401   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.306868   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:48.311197  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:48.311208  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:50.890106  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:50.900561  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:50.900623  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:50.925477  481598 cri.go:89] found id: ""
	I1216 04:39:50.925491  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.925498  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:50.925503  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:50.925573  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:50.950590  481598 cri.go:89] found id: ""
	I1216 04:39:50.950604  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.950611  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:50.950615  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:50.950670  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:50.975563  481598 cri.go:89] found id: ""
	I1216 04:39:50.975577  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.975584  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:50.975588  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:50.975649  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:51.001446  481598 cri.go:89] found id: ""
	I1216 04:39:51.001460  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.001468  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:51.001473  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:51.001546  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:51.036808  481598 cri.go:89] found id: ""
	I1216 04:39:51.036822  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.036830  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:51.036834  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:51.036893  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:51.063122  481598 cri.go:89] found id: ""
	I1216 04:39:51.063136  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.063143  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:51.063148  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:51.063204  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:51.091909  481598 cri.go:89] found id: ""
	I1216 04:39:51.091924  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.091931  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:51.091938  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:51.091949  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:51.157330  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:51.157357  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:51.172521  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:51.172537  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:51.237104  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:51.228688   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.229354   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.230964   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.231596   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.233259   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:51.228688   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.229354   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.230964   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.231596   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.233259   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:51.237115  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:51.237126  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:51.310463  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:51.310484  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
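
Note that the order of the `Gathering logs for ...` steps shuffles between cycles (kubelet, dmesg, CRI-O, container status, and describe nodes appear in varying sequence). That is consistent with the gatherers being iterated out of a Go map, whose iteration order is deliberately randomized per run; a tiny illustration (hypothetical names, not minikube's code):

    package main

    import "fmt"

    func main() {
        // nil funcs are placeholders; only the key order matters here
        gatherers := map[string]func(){
            "kubelet": nil, "dmesg": nil, "CRI-O": nil,
            "container status": nil, "describe nodes": nil,
        }
        // printed order varies from run to run, matching the log above
        for name := range gatherers {
            fmt.Println("Gathering logs for", name, "...")
        }
    }
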
	I1216 04:39:53.856519  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:53.866849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:53.866907  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:53.892183  481598 cri.go:89] found id: ""
	I1216 04:39:53.892197  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.892204  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:53.892210  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:53.892269  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:53.917961  481598 cri.go:89] found id: ""
	I1216 04:39:53.917975  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.917983  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:53.917987  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:53.918046  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:53.943214  481598 cri.go:89] found id: ""
	I1216 04:39:53.943228  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.943235  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:53.943240  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:53.943298  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:53.968696  481598 cri.go:89] found id: ""
	I1216 04:39:53.968710  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.968717  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:53.968722  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:53.968778  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:53.993878  481598 cri.go:89] found id: ""
	I1216 04:39:53.993892  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.993900  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:53.993905  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:53.993961  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:54.021892  481598 cri.go:89] found id: ""
	I1216 04:39:54.021911  481598 logs.go:282] 0 containers: []
	W1216 04:39:54.021918  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:54.021924  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:54.021989  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:54.048339  481598 cri.go:89] found id: ""
	I1216 04:39:54.048353  481598 logs.go:282] 0 containers: []
	W1216 04:39:54.048360  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:54.048368  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:54.048379  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:54.115518  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:54.107249   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.107772   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109446   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109968   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.111592   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:54.107249   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.107772   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109446   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109968   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.111592   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:54.115529  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:54.115540  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:54.184110  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:54.184130  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:54.212611  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:54.212627  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:54.280294  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:54.280314  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:56.795621  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:56.805834  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:56.805904  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:56.831835  481598 cri.go:89] found id: ""
	I1216 04:39:56.831850  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.831857  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:56.831862  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:56.831920  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:56.857986  481598 cri.go:89] found id: ""
	I1216 04:39:56.858000  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.858007  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:56.858012  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:56.858086  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:56.884049  481598 cri.go:89] found id: ""
	I1216 04:39:56.884062  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.884069  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:56.884074  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:56.884129  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:56.909467  481598 cri.go:89] found id: ""
	I1216 04:39:56.909481  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.909488  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:56.909493  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:56.909553  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:56.935361  481598 cri.go:89] found id: ""
	I1216 04:39:56.935375  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.935382  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:56.935387  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:56.935444  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:56.963724  481598 cri.go:89] found id: ""
	I1216 04:39:56.963738  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.963745  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:56.963750  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:56.963807  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:56.988482  481598 cri.go:89] found id: ""
	I1216 04:39:56.988495  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.988502  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:56.988510  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:56.988520  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:57.057566  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:57.057587  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:57.073142  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:57.073160  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:57.138961  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:57.130646   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.131071   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.132726   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.133151   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.134926   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:57.130646   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.131071   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.132726   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.133151   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.134926   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:57.138972  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:57.138983  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:57.206475  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:57.206497  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:59.739022  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:59.749638  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:59.749700  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:59.776094  481598 cri.go:89] found id: ""
	I1216 04:39:59.776109  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.776115  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:59.776120  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:59.776180  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:59.802606  481598 cri.go:89] found id: ""
	I1216 04:39:59.802621  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.802628  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:59.802634  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:59.802697  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:59.829710  481598 cri.go:89] found id: ""
	I1216 04:39:59.829724  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.829731  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:59.829736  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:59.829808  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:59.859658  481598 cri.go:89] found id: ""
	I1216 04:39:59.859673  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.859680  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:59.859685  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:59.859742  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:59.884817  481598 cri.go:89] found id: ""
	I1216 04:39:59.884831  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.884838  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:59.884843  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:59.884906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:59.911195  481598 cri.go:89] found id: ""
	I1216 04:39:59.911210  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.911217  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:59.911223  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:59.911283  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:59.936870  481598 cri.go:89] found id: ""
	I1216 04:39:59.936885  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.936891  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:59.936899  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:59.936909  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:00.003032  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:00.003054  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:00.086753  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:00.086772  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:00.242338  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:00.228915   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.229766   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.232549   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.234186   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.236627   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:40:00.242351  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:00.242395  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:00.380976  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:00.381000  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:40:02.964729  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:02.974990  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:40:02.975051  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:40:03.001443  481598 cri.go:89] found id: ""
	I1216 04:40:03.001458  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.001466  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:40:03.001471  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:40:03.001538  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:40:03.030227  481598 cri.go:89] found id: ""
	I1216 04:40:03.030241  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.030249  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:40:03.030254  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:40:03.030315  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:40:03.056406  481598 cri.go:89] found id: ""
	I1216 04:40:03.056421  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.056429  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:40:03.056439  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:40:03.056500  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:40:03.084430  481598 cri.go:89] found id: ""
	I1216 04:40:03.084452  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.084460  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:40:03.084465  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:40:03.084527  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:40:03.112058  481598 cri.go:89] found id: ""
	I1216 04:40:03.112072  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.112079  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:40:03.112084  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:40:03.112150  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:40:03.139147  481598 cri.go:89] found id: ""
	I1216 04:40:03.139161  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.139168  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:40:03.139173  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:40:03.139231  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:40:03.170943  481598 cri.go:89] found id: ""
	I1216 04:40:03.170958  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.170965  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:40:03.170973  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:40:03.170984  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:03.237388  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:03.237409  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:03.252191  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:03.252213  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:03.315123  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:03.306446   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.307653   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.308545   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.309495   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.310189   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:40:03.315132  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:03.315143  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:03.388848  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:03.388869  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:40:05.923315  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:05.934216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:40:05.934292  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:40:05.964778  481598 cri.go:89] found id: ""
	I1216 04:40:05.964791  481598 logs.go:282] 0 containers: []
	W1216 04:40:05.964798  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:40:05.964813  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:40:05.964895  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:40:05.991403  481598 cri.go:89] found id: ""
	I1216 04:40:05.991417  481598 logs.go:282] 0 containers: []
	W1216 04:40:05.991424  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:40:05.991429  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:40:05.991486  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:40:06.019838  481598 cri.go:89] found id: ""
	I1216 04:40:06.019853  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.019860  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:40:06.019865  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:40:06.019927  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:40:06.046554  481598 cri.go:89] found id: ""
	I1216 04:40:06.046569  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.046580  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:40:06.046585  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:40:06.046649  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:40:06.071958  481598 cri.go:89] found id: ""
	I1216 04:40:06.071973  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.071980  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:40:06.071985  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:40:06.072040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:40:06.099079  481598 cri.go:89] found id: ""
	I1216 04:40:06.099094  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.099101  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:40:06.099106  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:40:06.099170  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:40:06.126168  481598 cri.go:89] found id: ""
	I1216 04:40:06.126188  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.126195  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:40:06.126202  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:40:06.126213  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:06.192591  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:06.192611  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:06.207708  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:06.207729  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:06.274064  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:06.264712   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.265524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.267524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.268552   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.269537   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:40:06.274074  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:06.274086  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:06.343044  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:06.343066  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:40:08.873218  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:08.883654  481598 kubeadm.go:602] duration metric: took 4m3.325303057s to restartPrimaryControlPlane
	W1216 04:40:08.883714  481598 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 04:40:08.883788  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
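Restarting the existing control plane was abandoned after 4m3s, so minikube falls back to wiping the node state and re-initialising from scratch. The equivalent manual step, with the command, CRI socket and version-pinned binaries path taken verbatim from the Run line above:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force
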
	I1216 04:40:09.294329  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:40:09.307484  481598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:40:09.315713  481598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:40:09.315769  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:40:09.323612  481598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:40:09.323622  481598 kubeadm.go:158] found existing configuration files:
	
	I1216 04:40:09.323675  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:40:09.331783  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:40:09.331838  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:40:09.339284  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:40:09.346837  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:40:09.346891  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:40:09.354493  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:40:09.362269  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:40:09.362328  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:40:09.369970  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:40:09.378044  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:40:09.378103  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
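The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so that kubeadm init can regenerate it. After the reset every file is already absent, so each grep exits with status 2 and each rm is a no-op. A condensed sketch of the same logic, assuming the :8441 endpoint shown in the log:

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:8441' \
            "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
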
	I1216 04:40:09.385765  481598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:40:09.424060  481598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:40:09.424358  481598 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:40:09.495076  481598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:40:09.495141  481598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:40:09.495181  481598 kubeadm.go:319] OS: Linux
	I1216 04:40:09.495224  481598 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:40:09.495271  481598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:40:09.495318  481598 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:40:09.495365  481598 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:40:09.495412  481598 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:40:09.495459  481598 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:40:09.495502  481598 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:40:09.495550  481598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:40:09.495596  481598 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:40:09.563458  481598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:40:09.563582  481598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:40:09.563682  481598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:40:09.571744  481598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:40:09.577424  481598 out.go:252]   - Generating certificates and keys ...
	I1216 04:40:09.577526  481598 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:40:09.577597  481598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:40:09.577679  481598 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:40:09.577744  481598 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:40:09.577819  481598 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:40:09.577878  481598 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:40:09.577951  481598 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:40:09.578022  481598 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:40:09.578105  481598 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:40:09.578188  481598 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:40:09.578235  481598 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:40:09.578291  481598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:40:09.899760  481598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:40:10.102481  481598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:40:10.266020  481598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:40:10.669469  481598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:40:11.526452  481598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:40:11.527018  481598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:40:11.530635  481598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:40:11.533764  481598 out.go:252]   - Booting up control plane ...
	I1216 04:40:11.533860  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:40:11.533937  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:40:11.534462  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:40:11.549423  481598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:40:11.549689  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:40:11.557342  481598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:40:11.557601  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:40:11.557642  481598 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:40:11.689632  481598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:40:11.689752  481598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:44:11.687962  481598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001213504s
	I1216 04:44:11.687985  481598 kubeadm.go:319] 
	I1216 04:44:11.688045  481598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:44:11.688077  481598 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:44:11.688181  481598 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:44:11.688185  481598 kubeadm.go:319] 
	I1216 04:44:11.688293  481598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:44:11.688324  481598 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:44:11.688354  481598 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:44:11.688357  481598 kubeadm.go:319] 
	I1216 04:44:11.693131  481598 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:44:11.693558  481598 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:44:11.693669  481598 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:44:11.693904  481598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:44:11.693910  481598 kubeadm.go:319] 
	I1216 04:44:11.693977  481598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1216 04:44:11.694089  481598 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001213504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
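This first init attempt dies in the wait-control-plane phase: kubeadm polls the kubelet's local health endpoint for the full 4m0s and never gets an answer. The triage it suggests, plus the probe itself, can be run directly on the node; all three commands come from the kubeadm output above rather than being invented here:

    systemctl status kubelet
    journalctl -xeu kubelet -n 100
    curl -sSL http://127.0.0.1:10248/healthz   # prints 'ok' when the kubelet is healthy
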
	
	I1216 04:44:11.694190  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 04:44:12.104466  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:44:12.116829  481598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:44:12.116881  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:44:12.124364  481598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:44:12.124372  481598 kubeadm.go:158] found existing configuration files:
	
	I1216 04:44:12.124420  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:44:12.131751  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:44:12.131807  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:44:12.138938  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:44:12.146429  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:44:12.146482  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:44:12.153782  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:44:12.161218  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:44:12.161270  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:44:12.168781  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:44:12.176219  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:44:12.176271  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:44:12.183435  481598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:44:12.295783  481598 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:44:12.296200  481598 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:44:12.361811  481598 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:48:14.074988  481598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 04:48:14.075012  481598 kubeadm.go:319] 
	I1216 04:48:14.075081  481598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
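The retry fails the same way, except the healthz call is now refused outright rather than timing out, meaning nothing is listening on 10248 at all. Of the repeated preflight warnings, the cgroup one is the actionable lead on this 5.15 cgroup-v1 AWS kernel: kubelet v1.35+ rejects cgroups v1 unless the KubeletConfiguration opts in. A hedged sketch of that opt-in follows; the lowerCamelCase field name is inferred from the 'FailCgroupV1' option named in the warning, and whether minikube's generated config already sets it is not visible in this log:

    # Illustrative only: a real fix would merge the key into the existing
    # YAML (path taken from the kubelet-start lines above) rather than append.
    sudo tee -a /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet
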
	I1216 04:48:14.079141  481598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:48:14.079195  481598 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:48:14.079284  481598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:48:14.079338  481598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:48:14.079372  481598 kubeadm.go:319] OS: Linux
	I1216 04:48:14.079416  481598 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:48:14.079463  481598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:48:14.079508  481598 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:48:14.079555  481598 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:48:14.079602  481598 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:48:14.079664  481598 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:48:14.079709  481598 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:48:14.079755  481598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:48:14.079801  481598 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:48:14.079872  481598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:48:14.079966  481598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:48:14.080055  481598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:48:14.080117  481598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:48:14.083166  481598 out.go:252]   - Generating certificates and keys ...
	I1216 04:48:14.083255  481598 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:48:14.083327  481598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:48:14.083402  481598 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:48:14.083461  481598 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:48:14.083529  481598 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:48:14.083582  481598 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:48:14.083644  481598 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:48:14.083704  481598 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:48:14.083778  481598 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:48:14.083849  481598 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:48:14.083886  481598 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:48:14.083941  481598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:48:14.083991  481598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:48:14.084046  481598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:48:14.084103  481598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:48:14.084165  481598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:48:14.084218  481598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:48:14.084301  481598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:48:14.084366  481598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:48:14.087214  481598 out.go:252]   - Booting up control plane ...
	I1216 04:48:14.087326  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:48:14.087404  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:48:14.087497  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:48:14.087610  481598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:48:14.087707  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:48:14.087811  481598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:48:14.087895  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:48:14.087932  481598 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:48:14.088082  481598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:48:14.088189  481598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:48:14.088268  481598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00077674s
	I1216 04:48:14.088271  481598 kubeadm.go:319] 
	I1216 04:48:14.088334  481598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:48:14.088366  481598 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:48:14.088482  481598 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:48:14.088486  481598 kubeadm.go:319] 
	I1216 04:48:14.088595  481598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:48:14.088637  481598 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:48:14.088668  481598 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:48:14.088677  481598 kubeadm.go:319] 
	I1216 04:48:14.088733  481598 kubeadm.go:403] duration metric: took 12m8.569239535s to StartCluster
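The 12m8.57s StartCluster total is consistent with the timeline above: roughly 4m3s waiting on the restarted control plane, plus two kubeadm init attempts that each burned the full 4m0s kubelet-check timeout, plus the resets and preflight image pulls in between.
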
	I1216 04:48:14.088763  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:48:14.088824  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:48:14.121113  481598 cri.go:89] found id: ""
	I1216 04:48:14.121140  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.121148  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:48:14.121153  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:48:14.121210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:48:14.150916  481598 cri.go:89] found id: ""
	I1216 04:48:14.150931  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.150938  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:48:14.150943  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:48:14.151005  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:48:14.177693  481598 cri.go:89] found id: ""
	I1216 04:48:14.177709  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.177716  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:48:14.177721  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:48:14.177782  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:48:14.202900  481598 cri.go:89] found id: ""
	I1216 04:48:14.202914  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.202921  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:48:14.202926  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:48:14.202983  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:48:14.229346  481598 cri.go:89] found id: ""
	I1216 04:48:14.229360  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.229367  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:48:14.229372  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:48:14.229433  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:48:14.255869  481598 cri.go:89] found id: ""
	I1216 04:48:14.255884  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.255891  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:48:14.255896  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:48:14.255953  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:48:14.282757  481598 cri.go:89] found id: ""
	I1216 04:48:14.282772  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.282779  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:48:14.282787  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:48:14.282797  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:48:14.349482  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:48:14.349503  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:48:14.364748  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:48:14.364765  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:48:14.440728  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:48:14.431516   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.432409   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434236   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434802   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.436554   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:48:14.440741  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:48:14.440751  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:48:14.515072  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:48:14.515092  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 04:48:14.544694  481598 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:48:14.544736  481598 out.go:285] * 
	W1216 04:48:14.544844  481598 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:48:14.544900  481598 out.go:285] * 
	W1216 04:48:14.547108  481598 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:48:14.553105  481598 out.go:203] 
	W1216 04:48:14.555966  481598 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:48:14.556016  481598 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:48:14.556038  481598 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:48:14.559052  481598 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714709668Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.71475743Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714823679Z" level=info msg="Create NRI interface"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714952197Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714978487Z" level=info msg="runtime interface created"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714994996Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715003956Z" level=info msg="runtime interface starting up..."
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715015205Z" level=info msg="starting plugins..."
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715027849Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715097331Z" level=info msg="No systemd watchdog enabled"
	Dec 16 04:36:03 functional-763073 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.566937768Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5b381738-c32a-40c6-affb-c4aad9d726b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.567803155Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=7302f23d-29b3-4ddc-ad63-9af170663562 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.568336568Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=470a4814-2c77-4f21-97ca-d4b2d8b367c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.56886276Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e3d63019-6956-4b8d-9795-5e45ed470016 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.569572699Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1715eb88-0ece-47e1-8cf4-08ec329b9548 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.570118822Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=17ac1632-ceef-4623-82d4-95709ece00f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.570664255Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=9e736680-8e53-4709-9714-232fbfa617ef name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.365457664Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=66aba16f-2286-4957-9589-3f6b308f0653 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366373784Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a0b09546-fe1b-440e-8076-598a1e2930d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366892723Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=976ba277-fbb2-4db1-8ee0-ce87f329b2fa name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367464412Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=15d708f7-0c1f-4e61-bde7-afc75b1dc430 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367935941Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2d28f296-8f48-4bb2-bf27-13281f9a3b27 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368429435Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=82541142-23b6-4f48-816e-5b740356cd35 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368875848Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=29b0dee6-8ec8-4ecc-822d-bf19bcc0e034 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:50:17.925652   23280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:17.926056   23280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:17.927767   23280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:17.928423   23280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:17.930239   23280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	[Dec16 04:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:50:17 up  3:32,  0 user,  load average: 1.03, 0.46, 0.52
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:50:15 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:16 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1124.
	Dec 16 04:50:16 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:16 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:16 functional-763073 kubelet[23153]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:16 functional-763073 kubelet[23153]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:16 functional-763073 kubelet[23153]: E1216 04:50:16.343864   23153 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:16 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:16 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:17 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1125.
	Dec 16 04:50:17 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:17 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:17 functional-763073 kubelet[23190]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:17 functional-763073 kubelet[23190]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:17 functional-763073 kubelet[23190]: E1216 04:50:17.148063   23190 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:17 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:17 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:17 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1126.
	Dec 16 04:50:17 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:17 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:17 functional-763073 kubelet[23270]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:17 functional-763073 kubelet[23270]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:17 functional-763073 kubelet[23270]: E1216 04:50:17.891474   23270 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:17 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:17 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (349.967379ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.12s)
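
The kubelet journal at the end of this dump states the root cause on every restart (counters 1124-1126): kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, so the apiserver on port 8441 never comes up and the status/kubectl probes above all fail the same way. A minimal sketch for confirming this on the node, assuming the profile under test and using only tools the log itself references:

	# cgroup2fs means cgroup v2; tmpfs means the deprecated cgroup v1 seen here
	minikube -p functional-763073 ssh -- stat -fc %T /sys/fs/cgroup/
	# follow the restart loop the journal excerpts above come from
	minikube -p functional-763073 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 20

Per the kubeadm warning in the dump, kubelet v1.35+ tolerates cgroup v1 only when the kubelet configuration sets FailCgroupV1 to false; whether minikube's config patching reaches that field is an assumption to verify, so moving the host to cgroup v2 is the safer reading of this failure.
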
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-763073 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-763073 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (57.789183ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-763073 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-763073 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-763073 describe po hello-node-connect: exit status 1 (57.999458ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-763073 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-763073 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-763073 logs -l app=hello-node-connect: exit status 1 (61.092281ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-763073 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-763073 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-763073 describe svc hello-node-connect: exit status 1 (60.693548ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-763073 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
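
Every kubectl probe in this post-mortem fails with the same connection refused on 192.168.49.2:8441, so the deployment and service state is unknowable rather than wrong. A quick sketch to separate "apiserver down" from "service broken" before reading the describe/logs output above (the address and context come from this log; /readyz is the standard kube-apiserver health endpoint):

	# fails fast when nothing is listening on the apiserver port
	curl -sk https://192.168.49.2:8441/readyz; echo
	# the same check through kubectl's raw API access
	kubectl --context functional-763073 get --raw /readyz
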
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:

-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
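
When only the published apiserver port matters, docker's Go-template formatter can pull it out of the inspect data directly instead of scanning the JSON above (a sketch; the container name and the 8441/tcp key are taken from that dump):

	# prints the host port mapped to the container's 8441/tcp (33151 in this dump)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-763073
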
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (316.571391ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
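
Host reports Running here while APIServer reported Stopped two checks earlier, which is the split these per-field templates exist to expose. A single template can snapshot every component at once (a sketch; Host and APIServer appear in this log, the remaining field names are assumed from minikube's status output):

	out/minikube-linux-arm64 status -p functional-763073 \
	  --format '{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'
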
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │                     │
	│ cache   │ functional-763073 cache reload                                                                                                                               │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ ssh     │ functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │ 16 Dec 25 04:35 UTC │
	│ kubectl │ functional-763073 kubectl -- --context functional-763073 get pods                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:35 UTC │                     │
	│ start   │ -p functional-763073 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:36 UTC │                     │
	│ cp      │ functional-763073 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ config  │ functional-763073 config unset cpus                                                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ config  │ functional-763073 config get cpus                                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │                     │
	│ config  │ functional-763073 config set cpus 2                                                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ config  │ functional-763073 config get cpus                                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ config  │ functional-763073 config unset cpus                                                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ ssh     │ functional-763073 ssh -n functional-763073 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ config  │ functional-763073 config get cpus                                                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │                     │
	│ ssh     │ functional-763073 ssh echo hello                                                                                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ cp      │ functional-763073 cp functional-763073:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1427695800/001/cp-test.txt │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ ssh     │ functional-763073 ssh cat /etc/hostname                                                                                                                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ ssh     │ functional-763073 ssh -n functional-763073 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ tunnel  │ functional-763073 tunnel --alsologtostderr                                                                                                                   │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │                     │
	│ tunnel  │ functional-763073 tunnel --alsologtostderr                                                                                                                   │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │                     │
	│ cp      │ functional-763073 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ ssh     │ functional-763073 ssh -n functional-763073 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:48 UTC │ 16 Dec 25 04:48 UTC │
	│ addons  │ functional-763073 addons list                                                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ addons  │ functional-763073 addons list -o json                                                                                                                        │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:36:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:36:00.490248  481598 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:36:00.490394  481598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:36:00.490398  481598 out.go:374] Setting ErrFile to fd 2...
	I1216 04:36:00.490402  481598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:36:00.490827  481598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:36:00.491840  481598 out.go:368] Setting JSON to false
	I1216 04:36:00.492932  481598 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11907,"bootTime":1765847854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:36:00.493015  481598 start.go:143] virtualization:  
	I1216 04:36:00.496736  481598 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:36:00.500271  481598 notify.go:221] Checking for updates...
	I1216 04:36:00.500857  481598 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:36:00.504041  481598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:36:00.507246  481598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:36:00.510546  481598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:36:00.513957  481598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:36:00.517802  481598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:36:00.521529  481598 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:36:00.521658  481598 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:36:00.547571  481598 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:36:00.547683  481598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:36:00.612217  481598 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 04:36:00.602438298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
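
[editor note] The `docker system info --format "{{json .}}"` call above is how minikube snapshots the host's Docker state before reusing a profile. A minimal sketch of the same probe in Go, assuming only that the docker CLI is on PATH; the struct below is a small illustrative subset of the real payload printed in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds just the fields this sketch cares about; the payload
// in the log line above carries many more.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	Driver          string `json:"Driver"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	OperatingSystem string `json:"OperatingSystem"`
}

func main() {
	// Same invocation as the log: docker renders its info struct as one JSON object.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s, driver=%s, %d CPUs, %d bytes RAM, on %q\n",
		info.ServerVersion, info.Driver, info.NCPU, info.MemTotal, info.OperatingSystem)
}
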
	I1216 04:36:00.612309  481598 docker.go:319] overlay module found
	I1216 04:36:00.615642  481598 out.go:179] * Using the docker driver based on existing profile
	I1216 04:36:00.618516  481598 start.go:309] selected driver: docker
	I1216 04:36:00.618544  481598 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:00.618637  481598 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:36:00.618758  481598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:36:00.679148  481598 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-16 04:36:00.669430398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:36:00.679575  481598 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:36:00.679604  481598 cni.go:84] Creating CNI manager for ""
	I1216 04:36:00.679655  481598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:36:00.679698  481598 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:00.682841  481598 out.go:179] * Starting "functional-763073" primary control-plane node in "functional-763073" cluster
	I1216 04:36:00.685829  481598 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:36:00.688866  481598 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:36:00.691890  481598 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:36:00.691964  481598 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:36:00.691972  481598 cache.go:65] Caching tarball of preloaded images
	I1216 04:36:00.691982  481598 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:36:00.692074  481598 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 04:36:00.692084  481598 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:36:00.692227  481598 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/config.json ...
	I1216 04:36:00.712798  481598 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:36:00.712810  481598 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 04:36:00.712824  481598 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:36:00.712856  481598 start.go:360] acquireMachinesLock for functional-763073: {Name:mk37f96bdb0feffde12ec58bbc71256d58abc2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:36:00.712923  481598 start.go:364] duration metric: took 39.237µs to acquireMachinesLock for "functional-763073"
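
[editor note] The acquireMachinesLock entry above shows the lock's retry parameters (Delay:500ms Timeout:10m0s). A minimal sketch of that try/sleep/deadline pattern using a plain flock-backed file lock; the path and helper names are illustrative and this is not necessarily minikube's own lock mechanism:

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// tryLock attempts a non-blocking exclusive flock on path and returns
// the open file on success so the lock stays held while it is in scope.
func tryLock(path string) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		return nil, err
	}
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
		f.Close()
		return nil, err
	}
	return f, nil
}

// acquireWithRetry polls tryLock every delay until timeout, mirroring
// the Delay:500ms Timeout:10m0s settings in the log.
func acquireWithRetry(path string, delay, timeout time.Duration) (*os.File, error) {
	deadline := time.Now().Add(timeout)
	for {
		if f, err := tryLock(path); err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquireWithRetry("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Println("lock held")
}
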
	I1216 04:36:00.712941  481598 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:36:00.712958  481598 fix.go:54] fixHost starting: 
	I1216 04:36:00.713253  481598 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
	I1216 04:36:00.732242  481598 fix.go:112] recreateIfNeeded on functional-763073: state=Running err=<nil>
	W1216 04:36:00.732263  481598 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:36:00.735664  481598 out.go:252] * Updating the running docker "functional-763073" container ...
	I1216 04:36:00.735723  481598 machine.go:94] provisionDockerMachine start ...
	I1216 04:36:00.735809  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:00.753493  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:00.753813  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:00.753819  481598 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:36:00.888929  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:36:00.888952  481598 ubuntu.go:182] provisioning hostname "functional-763073"
	I1216 04:36:00.889028  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:00.908330  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:00.908643  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:00.908652  481598 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-763073 && echo "functional-763073" | sudo tee /etc/hostname
	I1216 04:36:01.055703  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-763073
	
	I1216 04:36:01.055772  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.082824  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:01.083159  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:01.083173  481598 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-763073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-763073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-763073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:36:01.221846  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: 
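
[editor note] The SSH script above is idempotent: it leaves /etc/hosts alone if a line already maps the hostname, rewrites an existing 127.0.1.1 entry if one is present, and otherwise appends one. The same logic in Go, as a sketch (simplified to a space separator; paths and permissions assumed):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell above: skip if the hostname is
// already mapped, else replace an existing 127.0.1.1 line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
			return nil // already mapped, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "functional-763073"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
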
	I1216 04:36:01.221862  481598 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 04:36:01.221883  481598 ubuntu.go:190] setting up certificates
	I1216 04:36:01.221900  481598 provision.go:84] configureAuth start
	I1216 04:36:01.221962  481598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:36:01.240557  481598 provision.go:143] copyHostCerts
	I1216 04:36:01.240641  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 04:36:01.240650  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 04:36:01.240725  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 04:36:01.240821  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 04:36:01.240825  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 04:36:01.240849  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 04:36:01.240902  481598 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 04:36:01.240908  481598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 04:36:01.240929  481598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 04:36:01.240972  481598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.functional-763073 san=[127.0.0.1 192.168.49.2 functional-763073 localhost minikube]
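
[editor note] The "generating server cert" line above issues a CA-signed server certificate whose SAN list mixes DNS names and IPs (127.0.0.1, 192.168.49.2, functional-763073, localhost, minikube). A minimal sketch of that step with Go's crypto/x509; loading the real ca.pem/ca-key.pem is elided in favor of a throwaway CA, and error handling is dropped to keep the sketch short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real flow loads ca.pem / ca-key.pem from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log: three names and two IPs.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-763073"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"functional-763073", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
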
	I1216 04:36:01.624943  481598 provision.go:177] copyRemoteCerts
	I1216 04:36:01.624996  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:36:01.625036  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.650668  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:01.753682  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:36:01.770658  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:36:01.788383  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:36:01.805726  481598 provision.go:87] duration metric: took 583.803742ms to configureAuth
	I1216 04:36:01.805744  481598 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:36:01.805933  481598 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:36:01.806039  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:01.826667  481598 main.go:143] libmachine: Using SSH client type: native
	I1216 04:36:01.826973  481598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1216 04:36:01.826985  481598 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:36:02.160545  481598 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:36:02.160560  481598 machine.go:97] duration metric: took 1.424830052s to provisionDockerMachine
	I1216 04:36:02.160570  481598 start.go:293] postStartSetup for "functional-763073" (driver="docker")
	I1216 04:36:02.160582  481598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:36:02.160662  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:36:02.160707  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.182446  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.281163  481598 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:36:02.284621  481598 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:36:02.284640  481598 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:36:02.284650  481598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 04:36:02.284704  481598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 04:36:02.284795  481598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 04:36:02.284876  481598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts -> hosts in /etc/test/nested/copy/441727
	I1216 04:36:02.284919  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/441727
	I1216 04:36:02.293096  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:36:02.311133  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts --> /etc/test/nested/copy/441727/hosts (40 bytes)
	I1216 04:36:02.329120  481598 start.go:296] duration metric: took 168.535354ms for postStartSetup
	I1216 04:36:02.329220  481598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:36:02.329269  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.348104  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.442235  481598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:36:02.448236  481598 fix.go:56] duration metric: took 1.735283267s for fixHost
	I1216 04:36:02.448253  481598 start.go:83] releasing machines lock for "functional-763073", held for 1.735323136s
	I1216 04:36:02.448324  481598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-763073
	I1216 04:36:02.466005  481598 ssh_runner.go:195] Run: cat /version.json
	I1216 04:36:02.466044  481598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:36:02.466046  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.466114  481598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
	I1216 04:36:02.490975  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.491519  481598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
	I1216 04:36:02.685578  481598 ssh_runner.go:195] Run: systemctl --version
	I1216 04:36:02.692865  481598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:36:02.731424  481598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:36:02.735810  481598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:36:02.735877  481598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:36:02.743925  481598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:36:02.743939  481598 start.go:496] detecting cgroup driver to use...
	I1216 04:36:02.743971  481598 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:36:02.744017  481598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:36:02.759444  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:36:02.772624  481598 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:36:02.772678  481598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:36:02.788424  481598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:36:02.802435  481598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:36:02.920156  481598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:36:03.035227  481598 docker.go:234] disabling docker service ...
	I1216 04:36:03.035430  481598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:36:03.052008  481598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:36:03.065420  481598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:36:03.183071  481598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:36:03.294099  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:36:03.311925  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:36:03.326859  481598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:36:03.326940  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.336429  481598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:36:03.336497  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.346614  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.357523  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.366947  481598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:36:03.376549  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.385465  481598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.394383  481598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:36:03.404860  481598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:36:03.413465  481598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:36:03.422752  481598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:36:03.536676  481598 ssh_runner.go:195] Run: sudo systemctl restart crio
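
[editor note] The run of sed one-liners above (04:36:03.32 through 04:36:03.42) rewrites /etc/crio/crio.conf.d/02-crio.conf in place — pause image, cgroup_manager, conmon_cgroup, default_sysctls — before the daemon-reload and CRI-O restart. Each edit is a whole-line replace keyed on the option name, which makes it idempotent. The same pattern in Go, shown for the first two substitutions only:

package main

import (
	"os"
	"regexp"
)

// setConfLine replaces any line matching pat with repl, mirroring
// sed -i 's|^.*key = .*$|key = "value"|' on the crio drop-in.
func setConfLine(path string, pat *regexp.Regexp, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	return os.WriteFile(path, pat.ReplaceAll(data, []byte(repl)), 0o644)
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	must(setConfLine(conf,
		regexp.MustCompile(`(?m)^.*pause_image = .*$`),
		`pause_image = "registry.k8s.io/pause:3.10.1"`))
	must(setConfLine(conf,
		regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`),
		`cgroup_manager = "cgroupfs"`))
}
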
	I1216 04:36:03.720606  481598 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:36:03.720702  481598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:36:03.724603  481598 start.go:564] Will wait 60s for crictl version
	I1216 04:36:03.724660  481598 ssh_runner.go:195] Run: which crictl
	I1216 04:36:03.728340  481598 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:36:03.755140  481598 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 04:36:03.755232  481598 ssh_runner.go:195] Run: crio --version
	I1216 04:36:03.787753  481598 ssh_runner.go:195] Run: crio --version
	I1216 04:36:03.823457  481598 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 04:36:03.826282  481598 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:36:03.843358  481598 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 04:36:03.850470  481598 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 04:36:03.853320  481598 kubeadm.go:884] updating cluster {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:36:03.853444  481598 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:36:03.853515  481598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:36:03.889904  481598 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:36:03.889916  481598 crio.go:433] Images already preloaded, skipping extraction
	I1216 04:36:03.889975  481598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:36:03.917662  481598 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:36:03.917679  481598 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:36:03.917686  481598 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1216 04:36:03.917785  481598 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-763073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
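
[editor note] The kubelet unit rendered above uses a standard systemd drop-in idiom: the bare `ExecStart=` line clears any ExecStart inherited from the base unit before the real command line is set. A sketch of writing such a drop-in from Go (the flags are a trimmed subset of the log's; the target path matches the scp destination a few lines below):

package main

import (
	"fmt"
	"os"
)

func main() {
	// The empty ExecStart= is deliberate: in a drop-in it resets the
	// inherited ExecStart so the next line fully replaces it.
	unit := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=%s --hostname-override=%s --node-ip=%s

[Install]
`, "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet", "functional-763073", "192.168.49.2")

	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A systemctl daemon-reload (as in the log) is still needed afterwards.
}
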
	I1216 04:36:03.917879  481598 ssh_runner.go:195] Run: crio config
	I1216 04:36:03.990629  481598 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 04:36:03.990650  481598 cni.go:84] Creating CNI manager for ""
	I1216 04:36:03.990663  481598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:36:03.990677  481598 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:36:03.990700  481598 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-763073 NodeName:functional-763073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:36:03.990828  481598 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-763073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
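
[editor note] The generated kubeadm.yaml above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), and the pod CIDR must appear identically as ClusterConfiguration's networking.podSubnet and KubeProxyConfiguration's clusterCIDR. A sketch of checking that with a multi-document decode; gopkg.in/yaml.v3 is the one non-stdlib import and is an assumption of this sketch, not something the log names:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var podSubnet, clusterCIDR string
	dec := yaml.NewDecoder(f)
	for {
		// Each "---"-separated document decodes into a generic map;
		// only two keys are plucked out here.
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		switch doc["kind"] {
		case "ClusterConfiguration":
			if net, ok := doc["networking"].(map[string]interface{}); ok {
				podSubnet, _ = net["podSubnet"].(string)
			}
		case "KubeProxyConfiguration":
			clusterCIDR, _ = doc["clusterCIDR"].(string)
		}
	}
	fmt.Printf("podSubnet=%q clusterCIDR=%q match=%v\n", podSubnet, clusterCIDR, podSubnet == clusterCIDR)
}
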
	
	I1216 04:36:03.990905  481598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:36:03.999067  481598 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:36:03.999139  481598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:36:04.008352  481598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1216 04:36:04.030586  481598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:36:04.045153  481598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1216 04:36:04.060527  481598 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:36:04.065456  481598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:36:04.194475  481598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:36:04.817563  481598 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073 for IP: 192.168.49.2
	I1216 04:36:04.817574  481598 certs.go:195] generating shared ca certs ...
	I1216 04:36:04.817590  481598 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:36:04.817743  481598 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 04:36:04.817795  481598 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 04:36:04.817801  481598 certs.go:257] generating profile certs ...
	I1216 04:36:04.817883  481598 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.key
	I1216 04:36:04.817938  481598 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key.8a462195
	I1216 04:36:04.817975  481598 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key
	I1216 04:36:04.818092  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 04:36:04.818123  481598 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 04:36:04.818130  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:36:04.818156  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 04:36:04.818185  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:36:04.818212  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 04:36:04.818262  481598 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 04:36:04.818840  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:36:04.841132  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:36:04.865044  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:36:04.885624  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:36:04.903731  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:36:04.922117  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:36:04.940753  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:36:04.958685  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:36:04.976252  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:36:04.996895  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 04:36:05.024451  481598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 04:36:05.043756  481598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:36:05.056987  481598 ssh_runner.go:195] Run: openssl version
	I1216 04:36:05.063602  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.071513  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 04:36:05.079286  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.083120  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.083179  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 04:36:05.124591  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:36:05.132537  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.139980  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:36:05.147726  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.151460  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.151517  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:36:05.192644  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:36:05.200305  481598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.207653  481598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 04:36:05.215074  481598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.218794  481598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.218861  481598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 04:36:05.260201  481598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
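
[editor note] The test -s / ln -fs / openssl x509 -hash / test -L sequence, repeated three times above, installs each CA into /etc/ssl/certs under both its own name and its OpenSSL subject-hash name (b5213941.0 is minikubeCA's hash, 51391683.0 the last cert's). The hash symlink is what OpenSSL-based clients use for lookup. A sketch of the same dance from Go, shelling out to openssl the way the log does; function and variable names are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links pemPath into certDir twice: once under its own base
// name and once under OpenSSL's subject-hash name (<hash>.0).
func installCA(pemPath, certDir string) error {
	base := filepath.Join(certDir, filepath.Base(pemPath))
	if err := os.Symlink(pemPath, base); err != nil && !os.IsExist(err) {
		return err
	}
	// Same command as the log: prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return os.Symlink(pemPath, filepath.Join(certDir, hash+".0"))
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
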
	I1216 04:36:05.267700  481598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:36:05.271723  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:36:05.312770  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:36:05.354108  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:36:05.396136  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:36:05.437154  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:36:05.478283  481598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
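
[editor note] `openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 24 hours; the six runs above screen the control-plane certs before they are reused. The equivalent check in pure Go, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d — the Go equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// 24h matches the log's -checkend 86400 (seconds).
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
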
	I1216 04:36:05.519503  481598 kubeadm.go:401] StartCluster: {Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:36:05.519581  481598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:36:05.519651  481598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:36:05.550651  481598 cri.go:89] found id: ""
	I1216 04:36:05.550716  481598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:36:05.558332  481598 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:36:05.558341  481598 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:36:05.558398  481598 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:36:05.566851  481598 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.567385  481598 kubeconfig.go:125] found "functional-763073" server: "https://192.168.49.2:8441"
	I1216 04:36:05.568647  481598 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:36:05.577205  481598 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 04:21:27.024069044 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 04:36:04.056943145 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
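
[editor note] The drift check above is plain `diff -u old new`: exit status 0 means identical, 1 means the files differ (so the cluster gets reconfigured from the new kubeadm.yaml, as the log says), and anything higher is a real error. A sketch of reading that tri-state from Go:

package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrift runs `diff -u old new` and maps its exit status:
// 0 = no drift, 1 = drift (diff text returned), >1 = real error.
func kubeadmConfigDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: the log's "config drift"
	}
	return false, "", err
}

func main() {
	drift, diff, err := kubeadmConfigDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drift {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}
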
	I1216 04:36:05.577214  481598 kubeadm.go:1161] stopping kube-system containers ...
	I1216 04:36:05.577232  481598 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 04:36:05.577291  481598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:36:05.613634  481598 cri.go:89] found id: ""
	I1216 04:36:05.613693  481598 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 04:36:05.631237  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:36:05.639373  481598 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 04:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 04:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 16 04:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 16 04:25 /etc/kubernetes/scheduler.conf
	
	I1216 04:36:05.639436  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:36:05.647869  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:36:05.655663  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.655719  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:36:05.663273  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:36:05.671183  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.671243  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:36:05.678591  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:36:05.686132  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:36:05.686188  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:36:05.693450  481598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:36:05.701540  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:05.748475  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.491126  481598 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.742626292s)
	I1216 04:36:07.491187  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.697669  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.751926  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 04:36:07.807760  481598 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:36:07.807833  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:08.308888  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:36:08.808759  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep poll repeats at ~500 ms intervals, 04:36:09 through 04:37:06 ...]
	I1216 04:37:06.808830  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:07.308901  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
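The minute of pgrep output above is a fixed-interval readiness poll: the same process check is retried roughly every 500 ms until a kube-apiserver process appears or the wait times out, after which minikube falls back to gathering diagnostics. A minimal sketch of that polling pattern, assuming a hypothetical waitForProcess helper rather than minikube's actual api_server.go logic:

** sketch **
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess re-runs pgrep until it reports a match or the timeout
// elapses; pgrep exits 0 exactly when a process matches the pattern.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("process %q did not appear within %v", pattern, timeout)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
	fmt.Println("wait result:", err)
}
** /sketch **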
	I1216 04:37:07.808015  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:07.808111  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:07.837945  481598 cri.go:89] found id: ""
	I1216 04:37:07.837959  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.837965  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:07.837970  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:07.838028  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:07.869351  481598 cri.go:89] found id: ""
	I1216 04:37:07.869366  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.869372  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:07.869377  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:07.869436  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:07.907276  481598 cri.go:89] found id: ""
	I1216 04:37:07.907290  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.907297  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:07.907302  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:07.907360  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:07.933358  481598 cri.go:89] found id: ""
	I1216 04:37:07.933373  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.933380  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:07.933385  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:07.933443  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:07.960678  481598 cri.go:89] found id: ""
	I1216 04:37:07.960692  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.960699  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:07.960704  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:07.960761  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:07.986399  481598 cri.go:89] found id: ""
	I1216 04:37:07.986414  481598 logs.go:282] 0 containers: []
	W1216 04:37:07.986421  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:07.986426  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:07.986483  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:08.015016  481598 cri.go:89] found id: ""
	I1216 04:37:08.015031  481598 logs.go:282] 0 containers: []
	W1216 04:37:08.015038  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:08.015046  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:08.015057  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:08.088739  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:08.088761  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:08.107036  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:08.107052  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:08.176727  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:08.167962   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.168702   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.170464   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.171100   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.172772   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:08.167962   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.168702   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.170464   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.171100   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:08.172772   11051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:08.176736  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:08.176749  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:08.244460  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:08.244483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
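Each diagnostic pass enumerates the expected control-plane containers with crictl before pulling journal logs for kubelet and CRI-O. The enumeration reduces to `crictl ps -a --quiet --name=<component>`, which prints matching container IDs one per line; empty output is what produces the "No container was found" warnings above. A small sketch of that query loop, with hypothetical helper names:

** sketch **
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the log's `crictl ps -a --quiet --name=<name>`
// calls: --quiet restricts the output to container IDs, one per line.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		if ids := listContainers(name); len(ids) > 0 {
			fmt.Printf("%s: %v\n", name, ids)
		} else {
			fmt.Printf("no container was found matching %q\n", name)
		}
	}
}
** /sketch **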
	I1216 04:37:10.772766  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:10.783210  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:10.783271  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:10.811358  481598 cri.go:89] found id: ""
	I1216 04:37:10.811374  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.811382  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:10.811388  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:10.811451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:10.841691  481598 cri.go:89] found id: ""
	I1216 04:37:10.841705  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.841712  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:10.841717  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:10.841792  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:10.869111  481598 cri.go:89] found id: ""
	I1216 04:37:10.869133  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.869141  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:10.869146  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:10.869227  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:10.897617  481598 cri.go:89] found id: ""
	I1216 04:37:10.897632  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.897640  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:10.897646  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:10.897709  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:10.924814  481598 cri.go:89] found id: ""
	I1216 04:37:10.924829  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.924838  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:10.924849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:10.924909  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:10.951147  481598 cri.go:89] found id: ""
	I1216 04:37:10.951162  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.951170  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:10.951181  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:10.951240  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:10.977944  481598 cri.go:89] found id: ""
	I1216 04:37:10.977958  481598 logs.go:282] 0 containers: []
	W1216 04:37:10.977965  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:10.977973  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:10.977984  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:11.046933  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:11.046953  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:11.062324  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:11.062340  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:11.128033  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:11.119557   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.119965   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.121750   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.122402   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.124048   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:11.119557   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.119965   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.121750   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.122402   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:11.124048   11159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:11.128044  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:11.128055  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:11.195835  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:11.195855  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:13.729443  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:13.739852  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:13.739911  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:13.765288  481598 cri.go:89] found id: ""
	I1216 04:37:13.765303  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.765310  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:13.765315  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:13.765372  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:13.791619  481598 cri.go:89] found id: ""
	I1216 04:37:13.791634  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.791641  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:13.791646  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:13.791713  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:13.829008  481598 cri.go:89] found id: ""
	I1216 04:37:13.829021  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.829028  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:13.829033  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:13.829115  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:13.860708  481598 cri.go:89] found id: ""
	I1216 04:37:13.860722  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.860729  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:13.860734  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:13.860795  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:13.890573  481598 cri.go:89] found id: ""
	I1216 04:37:13.890587  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.890594  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:13.890600  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:13.890659  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:13.921520  481598 cri.go:89] found id: ""
	I1216 04:37:13.921535  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.921543  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:13.921555  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:13.921616  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:13.950847  481598 cri.go:89] found id: ""
	I1216 04:37:13.950864  481598 logs.go:282] 0 containers: []
	W1216 04:37:13.950882  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:13.950890  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:13.950901  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:13.965697  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:13.965713  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:14.040284  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:14.030948   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.031892   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.033714   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.034372   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.035987   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:14.030948   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.031892   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.033714   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.034372   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:14.035987   11262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:14.040295  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:14.040307  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:14.114244  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:14.114266  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:14.146926  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:14.146942  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:16.715163  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:16.725607  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:16.725688  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:16.751194  481598 cri.go:89] found id: ""
	I1216 04:37:16.751208  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.751215  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:16.751220  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:16.751277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:16.780407  481598 cri.go:89] found id: ""
	I1216 04:37:16.780421  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.780428  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:16.780433  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:16.780496  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:16.806409  481598 cri.go:89] found id: ""
	I1216 04:37:16.806424  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.806431  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:16.806436  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:16.806504  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:16.838220  481598 cri.go:89] found id: ""
	I1216 04:37:16.838235  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.838242  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:16.838247  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:16.838306  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:16.866315  481598 cri.go:89] found id: ""
	I1216 04:37:16.866329  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.866336  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:16.866341  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:16.866414  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:16.899090  481598 cri.go:89] found id: ""
	I1216 04:37:16.899105  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.899112  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:16.899117  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:16.899178  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:16.924588  481598 cri.go:89] found id: ""
	I1216 04:37:16.924603  481598 logs.go:282] 0 containers: []
	W1216 04:37:16.924611  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:16.924618  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:16.924630  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:16.993464  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:16.993485  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:17.009562  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:17.009582  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:17.075397  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:17.067506   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.068020   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069521   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069902   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.071382   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:17.067506   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.068020   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069521   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.069902   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:17.071382   11371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:17.075408  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:17.075421  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:17.144979  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:17.145001  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:19.675069  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:19.685090  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:19.685149  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:19.711697  481598 cri.go:89] found id: ""
	I1216 04:37:19.711712  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.711719  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:19.711724  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:19.711781  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:19.737017  481598 cri.go:89] found id: ""
	I1216 04:37:19.737031  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.737038  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:19.737043  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:19.737129  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:19.764129  481598 cri.go:89] found id: ""
	I1216 04:37:19.764143  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.764150  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:19.764155  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:19.764210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:19.790063  481598 cri.go:89] found id: ""
	I1216 04:37:19.790077  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.790084  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:19.790098  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:19.790154  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:19.821689  481598 cri.go:89] found id: ""
	I1216 04:37:19.821703  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.821710  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:19.821716  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:19.821774  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:19.854088  481598 cri.go:89] found id: ""
	I1216 04:37:19.854103  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.854111  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:19.854116  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:19.854178  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:19.893475  481598 cri.go:89] found id: ""
	I1216 04:37:19.893496  481598 logs.go:282] 0 containers: []
	W1216 04:37:19.893505  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:19.893513  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:19.893524  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:19.961902  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:19.953918   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.954677   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956259   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956573   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.957902   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:19.953918   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.954677   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956259   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.956573   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:19.957902   11473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:19.961916  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:19.961927  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:20.031206  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:20.031233  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:20.062576  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:20.062596  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:20.132798  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:20.132818  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:22.649716  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:22.659636  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:22.659698  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:22.684490  481598 cri.go:89] found id: ""
	I1216 04:37:22.684505  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.684512  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:22.684542  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:22.684599  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:22.709083  481598 cri.go:89] found id: ""
	I1216 04:37:22.709098  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.709105  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:22.709110  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:22.709165  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:22.734473  481598 cri.go:89] found id: ""
	I1216 04:37:22.734487  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.734494  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:22.734499  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:22.734557  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:22.759459  481598 cri.go:89] found id: ""
	I1216 04:37:22.759473  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.759480  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:22.759485  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:22.759540  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:22.784416  481598 cri.go:89] found id: ""
	I1216 04:37:22.784430  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.784437  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:22.784442  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:22.784508  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:22.808823  481598 cri.go:89] found id: ""
	I1216 04:37:22.808837  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.808844  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:22.808849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:22.808906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:22.845939  481598 cri.go:89] found id: ""
	I1216 04:37:22.845965  481598 logs.go:282] 0 containers: []
	W1216 04:37:22.845973  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:22.845980  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:22.846001  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:22.939972  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:22.939998  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:22.969984  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:22.970003  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:23.041537  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:23.041560  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:23.059445  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:23.059461  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:23.127407  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:23.119122   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.119663   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121470   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121806   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.123327   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:23.119122   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.119663   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121470   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.121806   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:23.123327   11598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
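Every "describe nodes" attempt fails identically: kubectl is refused when dialing localhost:8441 because no apiserver ever started listening there. The condition can be checked independently of kubectl with a bare TCP probe; a minimal sketch:

** sketch **
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" from kubectl reduces to this: nothing is
	// listening on the apiserver port.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}
** /sketch **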
	I1216 04:37:25.628052  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:25.638431  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:25.638504  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:25.665151  481598 cri.go:89] found id: ""
	I1216 04:37:25.665164  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.665172  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:25.665176  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:25.665249  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:25.695604  481598 cri.go:89] found id: ""
	I1216 04:37:25.695617  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.695625  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:25.695630  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:25.695691  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:25.720754  481598 cri.go:89] found id: ""
	I1216 04:37:25.720768  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.720775  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:25.720780  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:25.720839  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:25.746771  481598 cri.go:89] found id: ""
	I1216 04:37:25.746785  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.746792  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:25.746797  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:25.746857  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:25.776233  481598 cri.go:89] found id: ""
	I1216 04:37:25.776247  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.776264  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:25.776269  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:25.776342  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:25.803891  481598 cri.go:89] found id: ""
	I1216 04:37:25.803914  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.803922  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:25.803927  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:25.804021  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:25.845002  481598 cri.go:89] found id: ""
	I1216 04:37:25.845016  481598 logs.go:282] 0 containers: []
	W1216 04:37:25.845023  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:25.845040  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:25.845053  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:25.921736  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:25.913341   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.914262   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.915800   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.916138   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.917723   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:25.913341   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.914262   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.915800   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.916138   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:25.917723   11684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:25.921746  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:25.921757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:25.989735  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:25.989756  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:26.020992  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:26.021012  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:26.094837  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:26.094856  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:28.610236  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:28.620641  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:28.620702  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:28.648449  481598 cri.go:89] found id: ""
	I1216 04:37:28.648463  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.648470  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:28.648480  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:28.648539  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:28.675317  481598 cri.go:89] found id: ""
	I1216 04:37:28.675332  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.675339  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:28.675344  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:28.675402  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:28.700978  481598 cri.go:89] found id: ""
	I1216 04:37:28.700992  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.700998  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:28.701003  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:28.701104  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:28.726354  481598 cri.go:89] found id: ""
	I1216 04:37:28.726367  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.726374  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:28.726379  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:28.726436  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:28.752843  481598 cri.go:89] found id: ""
	I1216 04:37:28.752857  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.752864  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:28.752869  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:28.752927  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:28.778190  481598 cri.go:89] found id: ""
	I1216 04:37:28.778205  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.778212  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:28.778217  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:28.778280  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:28.803029  481598 cri.go:89] found id: ""
	I1216 04:37:28.803044  481598 logs.go:282] 0 containers: []
	W1216 04:37:28.803051  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:28.803059  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:28.803070  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:28.896742  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:28.888260   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.888935   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890571   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890932   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.892534   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:28.888260   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.888935   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890571   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.890932   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:28.892534   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:28.896763  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:28.896776  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:28.964206  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:28.964228  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:28.996487  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:28.996503  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:29.063978  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:29.063998  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:31.580896  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:31.591181  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:31.591249  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:31.616263  481598 cri.go:89] found id: ""
	I1216 04:37:31.616277  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.616284  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:31.616289  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:31.616345  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:31.641685  481598 cri.go:89] found id: ""
	I1216 04:37:31.641700  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.641707  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:31.641712  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:31.641771  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:31.667472  481598 cri.go:89] found id: ""
	I1216 04:37:31.667487  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.667495  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:31.667500  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:31.667557  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:31.697212  481598 cri.go:89] found id: ""
	I1216 04:37:31.697241  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.697248  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:31.697253  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:31.697311  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:31.723185  481598 cri.go:89] found id: ""
	I1216 04:37:31.723199  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.723207  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:31.723212  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:31.723273  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:31.749934  481598 cri.go:89] found id: ""
	I1216 04:37:31.749957  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.749965  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:31.749970  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:31.750035  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:31.776884  481598 cri.go:89] found id: ""
	I1216 04:37:31.776905  481598 logs.go:282] 0 containers: []
	W1216 04:37:31.776911  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:31.776922  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:31.776933  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:31.856147  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:31.846171   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.847794   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.848402   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850247   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850827   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:31.846171   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.847794   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.848402   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850247   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:31.850827   11888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:31.856168  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:31.856188  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:31.928187  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:31.928207  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:31.960005  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:31.960023  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:32.031454  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:32.031474  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:34.550103  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:34.560823  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:34.560882  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:34.587067  481598 cri.go:89] found id: ""
	I1216 04:37:34.587082  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.587092  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:34.587097  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:34.587160  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:34.613934  481598 cri.go:89] found id: ""
	I1216 04:37:34.613949  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.613956  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:34.613961  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:34.614018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:34.639997  481598 cri.go:89] found id: ""
	I1216 04:37:34.640011  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.640018  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:34.640023  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:34.640087  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:34.666140  481598 cri.go:89] found id: ""
	I1216 04:37:34.666154  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.666161  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:34.666166  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:34.666226  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:34.692116  481598 cri.go:89] found id: ""
	I1216 04:37:34.692131  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.692138  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:34.692143  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:34.692203  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:34.717134  481598 cri.go:89] found id: ""
	I1216 04:37:34.717148  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.717156  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:34.717161  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:34.717228  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:34.743931  481598 cri.go:89] found id: ""
	I1216 04:37:34.743946  481598 logs.go:282] 0 containers: []
	W1216 04:37:34.743963  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:34.743971  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:34.743983  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:34.809826  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:34.809849  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:34.827619  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:34.827636  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:34.903666  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:34.894237   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.895124   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.896898   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.897701   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.898407   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:34.894237   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.895124   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.896898   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.897701   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:34.898407   12004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:34.903676  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:34.903686  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:34.972944  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:34.972967  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
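
[editor's note] By this point the pattern is stable: journalctl reaches both the crio and kubelet units, so the runtime layer is up, yet no Kubernetes container was ever created, which is why every `crictl ps --name=...` probe keeps returning an empty list. A few hedged follow-up checks one could run on the node (assuming `ss` is available in the node image, which this log does not confirm):

    sudo ss -ltnp | grep 8441          # is anything at all listening on the apiserver port?
    sudo crictl ps -a                  # any containers in any state, not just control plane?
    sudo journalctl -u kubelet -n 100 --no-pager | grep -iE 'fail|error'   # why pods never started
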
	I1216 04:37:37.507549  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:37.517802  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:37.517863  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:37.543131  481598 cri.go:89] found id: ""
	I1216 04:37:37.543147  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.543155  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:37.543167  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:37.543224  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:37.568202  481598 cri.go:89] found id: ""
	I1216 04:37:37.568216  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.568223  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:37.568231  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:37.568288  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:37.593976  481598 cri.go:89] found id: ""
	I1216 04:37:37.593991  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.593998  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:37.594003  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:37.594066  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:37.619760  481598 cri.go:89] found id: ""
	I1216 04:37:37.619774  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.619781  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:37.619787  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:37.619848  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:37.644836  481598 cri.go:89] found id: ""
	I1216 04:37:37.644850  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.644857  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:37.644862  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:37.644921  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:37.670454  481598 cri.go:89] found id: ""
	I1216 04:37:37.670468  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.670476  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:37.670481  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:37.670537  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:37.695742  481598 cri.go:89] found id: ""
	I1216 04:37:37.695762  481598 logs.go:282] 0 containers: []
	W1216 04:37:37.695769  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:37.695777  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:37.695787  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:37.759713  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:37.759732  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:37.774589  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:37.774606  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:37.849933  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:37.841390   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.842110   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.843743   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.844252   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.845814   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:37.841390   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.842110   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.843743   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.844252   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:37.845814   12103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:37.849945  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:37.849955  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:37.928468  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:37.928489  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:40.459800  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:40.470285  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:40.470349  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:40.499380  481598 cri.go:89] found id: ""
	I1216 04:37:40.499394  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.499401  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:40.499406  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:40.499464  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:40.528986  481598 cri.go:89] found id: ""
	I1216 04:37:40.529000  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.529007  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:40.529012  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:40.529089  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:40.555623  481598 cri.go:89] found id: ""
	I1216 04:37:40.555638  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.555646  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:40.555651  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:40.555708  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:40.581298  481598 cri.go:89] found id: ""
	I1216 04:37:40.581312  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.581319  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:40.581324  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:40.581382  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:40.611085  481598 cri.go:89] found id: ""
	I1216 04:37:40.611099  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.611106  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:40.611113  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:40.611173  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:40.636162  481598 cri.go:89] found id: ""
	I1216 04:37:40.636178  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.636185  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:40.636190  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:40.636250  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:40.664257  481598 cri.go:89] found id: ""
	I1216 04:37:40.664272  481598 logs.go:282] 0 containers: []
	W1216 04:37:40.664279  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:40.664287  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:40.664299  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:40.680011  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:40.680027  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:40.745907  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:40.737277   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.738066   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.739727   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.740303   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.741915   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:40.737277   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.738066   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.739727   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.740303   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:40.741915   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:40.745919  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:40.745932  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:40.814715  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:40.814735  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:40.859159  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:40.859181  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:43.432718  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:43.443193  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:43.443264  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:43.469157  481598 cri.go:89] found id: ""
	I1216 04:37:43.469187  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.469195  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:43.469200  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:43.469323  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:43.494783  481598 cri.go:89] found id: ""
	I1216 04:37:43.494796  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.494804  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:43.494809  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:43.494869  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:43.521488  481598 cri.go:89] found id: ""
	I1216 04:37:43.521502  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.521509  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:43.521514  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:43.521573  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:43.550707  481598 cri.go:89] found id: ""
	I1216 04:37:43.550721  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.550728  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:43.550733  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:43.550791  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:43.579977  481598 cri.go:89] found id: ""
	I1216 04:37:43.579991  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.579997  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:43.580002  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:43.580064  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:43.605041  481598 cri.go:89] found id: ""
	I1216 04:37:43.605056  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.605143  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:43.605149  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:43.605208  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:43.631632  481598 cri.go:89] found id: ""
	I1216 04:37:43.631658  481598 logs.go:282] 0 containers: []
	W1216 04:37:43.631665  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:43.631672  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:43.631691  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:43.701085  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:43.701111  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:43.716379  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:43.716401  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:43.778569  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:43.770070   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.770734   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.772497   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.773037   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.774731   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:43.770070   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.770734   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.772497   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.773037   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:43.774731   12315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:43.778594  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:43.778606  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:43.850663  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:43.850686  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:46.388473  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:46.398649  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:46.398713  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:46.425758  481598 cri.go:89] found id: ""
	I1216 04:37:46.425772  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.425780  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:46.425785  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:46.425843  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:46.453363  481598 cri.go:89] found id: ""
	I1216 04:37:46.453377  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.453384  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:46.453389  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:46.453450  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:46.479051  481598 cri.go:89] found id: ""
	I1216 04:37:46.479066  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.479074  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:46.479079  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:46.479135  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:46.509758  481598 cri.go:89] found id: ""
	I1216 04:37:46.509773  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.509781  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:46.509786  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:46.509849  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:46.536775  481598 cri.go:89] found id: ""
	I1216 04:37:46.536788  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.536795  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:46.536801  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:46.536870  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:46.562238  481598 cri.go:89] found id: ""
	I1216 04:37:46.562253  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.562262  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:46.562268  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:46.562326  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:46.588577  481598 cri.go:89] found id: ""
	I1216 04:37:46.588591  481598 logs.go:282] 0 containers: []
	W1216 04:37:46.588598  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:46.588606  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:46.588617  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:46.658427  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:46.658447  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:46.692280  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:46.692304  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:46.758854  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:46.758874  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:46.778062  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:46.778079  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:46.855875  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:46.846770   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.848177   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.849959   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.850258   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.851693   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:46.846770   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.848177   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.849959   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.850258   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:46.851693   12433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
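
[editor's note] The cycles repeat every ~3s with identical results, consistent with a poll-until-deadline loop. A hypothetical bash sketch of that pattern, not minikube's actual implementation (the 300s deadline, the sleep interval, and the TCP-level check are all assumptions):

    # poll until something accepts TCP connections on localhost:8441, or give up
    deadline=$((SECONDS + 300))
    until timeout 2 bash -c '</dev/tcp/localhost/8441' 2>/dev/null; do
      if (( SECONDS >= deadline )); then
        echo 'apiserver never came up on :8441' >&2
        exit 1
      fi
      sleep 3   # roughly the cadence between the cycles in this log
    done
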
	I1216 04:37:49.357557  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:49.367602  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:49.367665  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:49.393022  481598 cri.go:89] found id: ""
	I1216 04:37:49.393037  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.393044  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:49.393049  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:49.393125  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:49.421701  481598 cri.go:89] found id: ""
	I1216 04:37:49.421716  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.421723  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:49.421728  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:49.421789  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:49.447139  481598 cri.go:89] found id: ""
	I1216 04:37:49.447154  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.447161  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:49.447166  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:49.447226  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:49.472003  481598 cri.go:89] found id: ""
	I1216 04:37:49.472018  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.472026  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:49.472032  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:49.472090  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:49.497762  481598 cri.go:89] found id: ""
	I1216 04:37:49.497782  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.497790  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:49.497794  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:49.497853  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:49.527970  481598 cri.go:89] found id: ""
	I1216 04:37:49.527984  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.527992  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:49.527997  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:49.528055  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:49.554573  481598 cri.go:89] found id: ""
	I1216 04:37:49.554587  481598 logs.go:282] 0 containers: []
	W1216 04:37:49.554596  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:49.554604  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:49.554615  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:49.620959  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:49.620979  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:49.636096  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:49.636115  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:49.705535  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:49.696916   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.697607   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699320   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699896   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.701682   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:49.696916   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.697607   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699320   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.699896   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:49.701682   12527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:49.705545  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:49.705556  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:49.774081  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:49.774101  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:52.303119  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:52.313248  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:52.313317  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:52.339092  481598 cri.go:89] found id: ""
	I1216 04:37:52.339106  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.339113  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:52.339118  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:52.339181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:52.370928  481598 cri.go:89] found id: ""
	I1216 04:37:52.370942  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.370949  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:52.370954  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:52.371011  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:52.395986  481598 cri.go:89] found id: ""
	I1216 04:37:52.396000  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.396007  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:52.396012  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:52.396068  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:52.425010  481598 cri.go:89] found id: ""
	I1216 04:37:52.425024  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.425031  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:52.425036  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:52.425118  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:52.450781  481598 cri.go:89] found id: ""
	I1216 04:37:52.450796  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.450803  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:52.450808  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:52.450867  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:52.476589  481598 cri.go:89] found id: ""
	I1216 04:37:52.476603  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.476611  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:52.476617  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:52.476675  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:52.503929  481598 cri.go:89] found id: ""
	I1216 04:37:52.503944  481598 logs.go:282] 0 containers: []
	W1216 04:37:52.503951  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:52.503959  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:52.503970  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:52.519124  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:52.519149  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:52.587049  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:52.577711   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.578577   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.580576   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.581341   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.583137   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:52.577711   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.578577   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.580576   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.581341   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:52.583137   12629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
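Every kubectl attempt above dies with "dial tcp [::1]:8441: connect: connection refused", meaning nothing is listening on the apiserver port at all. A quick reachability probe that distinguishes "refused" from a hang (a sketch, assuming the same localhost:8441 endpoint shown in the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" returns immediately; a firewall drop or a
        // wedged listener would instead consume the whole timeout.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }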
	I1216 04:37:52.587060  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:52.587072  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:52.657393  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:52.657415  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:52.686271  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:52.686289  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:55.258225  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:55.268276  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:55.268339  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:55.295458  481598 cri.go:89] found id: ""
	I1216 04:37:55.295471  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.295479  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:55.295484  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:55.295550  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:55.322181  481598 cri.go:89] found id: ""
	I1216 04:37:55.322195  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.322202  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:55.322207  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:55.322315  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:55.347301  481598 cri.go:89] found id: ""
	I1216 04:37:55.347316  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.347323  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:55.347329  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:55.347390  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:55.372973  481598 cri.go:89] found id: ""
	I1216 04:37:55.372988  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.372995  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:55.373000  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:55.373057  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:55.398159  481598 cri.go:89] found id: ""
	I1216 04:37:55.398173  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.398179  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:55.398184  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:55.398245  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:55.423108  481598 cri.go:89] found id: ""
	I1216 04:37:55.423122  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.423128  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:55.423133  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:55.423198  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:55.449345  481598 cri.go:89] found id: ""
	I1216 04:37:55.449360  481598 logs.go:282] 0 containers: []
	W1216 04:37:55.449367  481598 logs.go:284] No container was found matching "kindnet"
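The run of crictl queries above comes back empty for every control-plane component, so the apiserver is not merely unreachable; its container was never (re)created. A standalone sketch of that probe (hypothetical helper, assuming crictl is on PATH and run with sudo locally rather than over minikube's SSH runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors "crictl ps -a --quiet --name=<name>":
    // one container ID per line, or no output when nothing matches.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("probe %s: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
            }
        }
    }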
	I1216 04:37:55.449375  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:55.449397  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:55.514641  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:55.514662  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:55.529353  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:55.529369  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:55.598810  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:55.589643   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.590588   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.591554   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593248   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593891   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:55.589643   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.590588   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.591554   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593248   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:55.593891   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:55.598830  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:55.598842  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:55.666947  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:55.666967  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:37:58.197584  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:37:58.208946  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:37:58.209018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:37:58.234805  481598 cri.go:89] found id: ""
	I1216 04:37:58.234819  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.234826  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:37:58.234831  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:37:58.234886  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:37:58.259158  481598 cri.go:89] found id: ""
	I1216 04:37:58.259171  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.259178  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:37:58.259183  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:37:58.259241  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:37:58.286151  481598 cri.go:89] found id: ""
	I1216 04:37:58.286165  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.286172  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:37:58.286177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:37:58.286234  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:37:58.310737  481598 cri.go:89] found id: ""
	I1216 04:37:58.310750  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.310757  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:37:58.310762  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:37:58.310817  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:37:58.334963  481598 cri.go:89] found id: ""
	I1216 04:37:58.334978  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.334985  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:37:58.334989  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:37:58.335054  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:37:58.363884  481598 cri.go:89] found id: ""
	I1216 04:37:58.363910  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.363918  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:37:58.363924  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:37:58.363992  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:37:58.387948  481598 cri.go:89] found id: ""
	I1216 04:37:58.387961  481598 logs.go:282] 0 containers: []
	W1216 04:37:58.387968  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:37:58.387977  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:37:58.387988  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:37:58.452873  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:37:58.452892  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:37:58.468670  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:37:58.468688  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:37:58.537376  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:37:58.528562   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.529202   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.530985   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.531559   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.533122   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:37:58.528562   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.529202   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.530985   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.531559   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:37:58.533122   12836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:37:58.537385  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:37:58.537396  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:37:58.606317  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:37:58.606339  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
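The pgrep lines above recur roughly every three seconds (04:37:52, :55, :58, 04:38:01, ...), the signature of a fixed-interval poll waiting for the apiserver process to appear. A sketch of such a loop under those assumptions (the interval and timeout values are illustrative, not minikube's actual settings):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning uses the same pgrep pattern as the log; pgrep
    // exits 0 only when at least one process matches.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for kube-apiserver")
    }

    func main() {
        if err := waitForAPIServer(3*time.Second, time.Minute); err != nil {
            fmt.Println(err)
        }
    }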
	I1216 04:38:01.135427  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:01.146890  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:01.146955  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:01.174260  481598 cri.go:89] found id: ""
	I1216 04:38:01.174275  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.174282  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:01.174287  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:01.174347  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:01.199944  481598 cri.go:89] found id: ""
	I1216 04:38:01.199958  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.199965  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:01.199970  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:01.200033  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:01.228798  481598 cri.go:89] found id: ""
	I1216 04:38:01.228814  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.228820  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:01.228825  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:01.228884  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:01.255775  481598 cri.go:89] found id: ""
	I1216 04:38:01.255789  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.255796  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:01.255801  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:01.255860  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:01.281657  481598 cri.go:89] found id: ""
	I1216 04:38:01.281671  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.281678  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:01.281683  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:01.281742  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:01.307766  481598 cri.go:89] found id: ""
	I1216 04:38:01.307779  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.307786  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:01.307791  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:01.307851  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:01.333581  481598 cri.go:89] found id: ""
	I1216 04:38:01.333595  481598 logs.go:282] 0 containers: []
	W1216 04:38:01.333602  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:01.333610  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:01.333621  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:01.399337  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:01.399356  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:01.414266  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:01.414283  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:01.482637  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:01.474533   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.475363   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.476875   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.477409   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.478874   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:01.474533   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.475363   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.476875   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.477409   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:01.478874   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:01.482650  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:01.482662  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:01.550883  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:01.550905  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:04.081199  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:04.093060  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:04.093177  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:04.125499  481598 cri.go:89] found id: ""
	I1216 04:38:04.125513  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.125521  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:04.125526  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:04.125595  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:04.151973  481598 cri.go:89] found id: ""
	I1216 04:38:04.151987  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.151994  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:04.151999  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:04.152058  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:04.180246  481598 cri.go:89] found id: ""
	I1216 04:38:04.180260  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.180266  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:04.180271  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:04.180328  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:04.207652  481598 cri.go:89] found id: ""
	I1216 04:38:04.207665  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.207672  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:04.207678  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:04.207735  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:04.233457  481598 cri.go:89] found id: ""
	I1216 04:38:04.233470  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.233477  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:04.233483  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:04.233540  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:04.259854  481598 cri.go:89] found id: ""
	I1216 04:38:04.259868  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.259875  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:04.259880  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:04.259941  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:04.285804  481598 cri.go:89] found id: ""
	I1216 04:38:04.285818  481598 logs.go:282] 0 containers: []
	W1216 04:38:04.285825  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:04.285832  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:04.285843  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:04.364313  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:04.364343  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:04.397537  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:04.397559  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:04.466334  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:04.466358  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:04.481695  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:04.481712  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:04.549601  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:04.541286   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.542136   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.543652   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.544110   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.545613   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:04.541286   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.542136   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.543652   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.544110   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:04.545613   13055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
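Each iteration also retries "describe nodes" with the node-local kubectl binary and kubeconfig, recording stdout and stderr separately, which is why the same stderr text appears twice per failure above. A sketch of that invocation (paths copied from the log; adjust them for another cluster):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr // keep the streams apart, as the log does
        if err := cmd.Run(); err != nil {
            fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
                err, stdout.String(), stderr.String())
        }
    }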
	I1216 04:38:07.049858  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:07.060224  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:07.060286  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:07.095538  481598 cri.go:89] found id: ""
	I1216 04:38:07.095552  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.095558  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:07.095572  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:07.095630  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:07.134098  481598 cri.go:89] found id: ""
	I1216 04:38:07.134113  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.134120  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:07.134125  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:07.134181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:07.160282  481598 cri.go:89] found id: ""
	I1216 04:38:07.160296  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.160312  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:07.160317  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:07.160375  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:07.186194  481598 cri.go:89] found id: ""
	I1216 04:38:07.186208  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.186215  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:07.186220  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:07.186277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:07.211185  481598 cri.go:89] found id: ""
	I1216 04:38:07.211198  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.211211  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:07.211216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:07.211274  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:07.236131  481598 cri.go:89] found id: ""
	I1216 04:38:07.236145  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.236171  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:07.236177  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:07.236243  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:07.262438  481598 cri.go:89] found id: ""
	I1216 04:38:07.262452  481598 logs.go:282] 0 containers: []
	W1216 04:38:07.262459  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:07.262467  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:07.262477  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:07.331225  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:07.331246  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:07.359219  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:07.359236  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:07.426207  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:07.426225  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:07.441345  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:07.441364  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:07.509422  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:07.501041   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.501780   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503380   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503873   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.505492   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:07.501041   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.501780   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503380   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.503873   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:07.505492   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:10.011147  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:10.023261  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:10.023327  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:10.050971  481598 cri.go:89] found id: ""
	I1216 04:38:10.050986  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.050994  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:10.050999  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:10.051073  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:10.085339  481598 cri.go:89] found id: ""
	I1216 04:38:10.085353  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.085360  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:10.085366  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:10.085434  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:10.124529  481598 cri.go:89] found id: ""
	I1216 04:38:10.124543  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.124551  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:10.124556  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:10.124624  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:10.164418  481598 cri.go:89] found id: ""
	I1216 04:38:10.164434  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.164442  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:10.164448  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:10.164517  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:10.190732  481598 cri.go:89] found id: ""
	I1216 04:38:10.190746  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.190753  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:10.190758  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:10.190815  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:10.216314  481598 cri.go:89] found id: ""
	I1216 04:38:10.216339  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.216346  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:10.216352  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:10.216419  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:10.241726  481598 cri.go:89] found id: ""
	I1216 04:38:10.241747  481598 logs.go:282] 0 containers: []
	W1216 04:38:10.241755  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:10.241768  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:10.241780  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:10.314496  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:10.304987   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.305903   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.306681   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.308501   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.309133   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:10.304987   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.305903   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.306681   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.308501   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:10.309133   13246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:10.314506  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:10.314520  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:10.383929  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:10.383952  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:10.414686  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:10.414703  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:10.480296  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:10.480315  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:12.997386  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:13.013029  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:13.013152  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:13.043756  481598 cri.go:89] found id: ""
	I1216 04:38:13.043772  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.043779  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:13.043784  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:13.043841  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:13.078538  481598 cri.go:89] found id: ""
	I1216 04:38:13.078552  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.078559  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:13.078564  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:13.078625  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:13.107509  481598 cri.go:89] found id: ""
	I1216 04:38:13.107523  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.107530  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:13.107535  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:13.107590  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:13.144886  481598 cri.go:89] found id: ""
	I1216 04:38:13.144900  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.144907  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:13.144912  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:13.144967  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:13.172261  481598 cri.go:89] found id: ""
	I1216 04:38:13.172275  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.172282  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:13.172287  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:13.172346  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:13.200255  481598 cri.go:89] found id: ""
	I1216 04:38:13.200270  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.200277  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:13.200282  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:13.200339  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:13.231840  481598 cri.go:89] found id: ""
	I1216 04:38:13.231855  481598 logs.go:282] 0 containers: []
	W1216 04:38:13.231864  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:13.231871  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:13.231882  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:13.305140  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:13.305162  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:13.320119  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:13.320135  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:13.384652  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:13.376630   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.377445   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.378990   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.379381   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.380897   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:38:13.376630   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.377445   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.378990   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.379381   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:13.380897   13357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:38:13.384662  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:13.384672  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:13.452891  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:13.452913  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:15.986467  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:15.996642  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:15.996705  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:16.023730  481598 cri.go:89] found id: ""
	I1216 04:38:16.023745  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.023752  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:16.023757  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:16.023814  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:16.048187  481598 cri.go:89] found id: ""
	I1216 04:38:16.048202  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.048209  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:16.048214  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:16.048270  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:16.084197  481598 cri.go:89] found id: ""
	I1216 04:38:16.084210  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.084217  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:16.084222  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:16.084279  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:16.114000  481598 cri.go:89] found id: ""
	I1216 04:38:16.114014  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.114021  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:16.114026  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:16.114095  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:16.146003  481598 cri.go:89] found id: ""
	I1216 04:38:16.146016  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.146023  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:16.146028  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:16.146085  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:16.171053  481598 cri.go:89] found id: ""
	I1216 04:38:16.171067  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.171074  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:16.171079  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:16.171146  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:16.195607  481598 cri.go:89] found id: ""
	I1216 04:38:16.195621  481598 logs.go:282] 0 containers: []
	W1216 04:38:16.195629  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:16.195637  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:16.195647  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:16.261510  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:16.261531  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:16.276956  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:16.276972  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:16.337904  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:16.329776   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.330345   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.331568   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.332130   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:16.333841   13462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:16.337914  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:16.337925  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:16.407434  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:16.407456  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
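Each ~3-second block above is one iteration of minikube's wait-for-apiserver loop: it probes for a kube-apiserver process, asks the CRI runtime for containers matching each control-plane component (every query returns empty), then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The describe-nodes step keeps failing because nothing is listening on localhost:8441. A minimal sketch of the equivalent manual checks, run in a shell on the node itself (the pgrep pattern, kubectl path, and kubeconfig location are copied from the log above; this is an illustration, not minikube's own code):

#!/usr/bin/env bash
# 1. Is any apiserver process running? (same pgrep pattern as the log)
sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

# 2. Does the CRI runtime know about any control-plane containers?
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  echo "$name: ${ids:-<none>}"
done

# 3. The same describe-nodes call the log shows failing with "connection refused".
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig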
	I1216 04:38:18.938513  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:18.948612  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:18.948671  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:18.973989  481598 cri.go:89] found id: ""
	I1216 04:38:18.974004  481598 logs.go:282] 0 containers: []
	W1216 04:38:18.974011  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:18.974016  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:18.974076  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:18.999416  481598 cri.go:89] found id: ""
	I1216 04:38:18.999430  481598 logs.go:282] 0 containers: []
	W1216 04:38:18.999437  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:18.999442  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:18.999499  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:19.036420  481598 cri.go:89] found id: ""
	I1216 04:38:19.036433  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.036440  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:19.036444  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:19.036500  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:19.063584  481598 cri.go:89] found id: ""
	I1216 04:38:19.063600  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.063617  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:19.063623  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:19.063694  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:19.099252  481598 cri.go:89] found id: ""
	I1216 04:38:19.099275  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.099283  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:19.099289  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:19.099363  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:19.126285  481598 cri.go:89] found id: ""
	I1216 04:38:19.126307  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.126315  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:19.126320  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:19.126387  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:19.151707  481598 cri.go:89] found id: ""
	I1216 04:38:19.151722  481598 logs.go:282] 0 containers: []
	W1216 04:38:19.151738  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:19.151746  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:19.151757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:19.216698  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:19.216723  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:19.231764  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:19.231783  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:19.299324  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:19.291049   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.291658   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.293310   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.293836   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:19.295297   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:19.299334  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:19.299344  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:19.368556  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:19.368580  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
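The container-status command is built as a two-stage fallback: the command substitution `which crictl || echo crictl` expands to the full path of crictl when it is on PATH and to the bare name otherwise (so the outer command is never empty), and the trailing `|| sudo docker ps -a` retries with the Docker CLI if the crictl invocation fails for any reason. The same pattern, annotated (illustrative only):

# Prefer crictl; degrade to the bare name, then to the Docker CLI.
sudo `which crictl || echo crictl` ps -a \
  || sudo docker ps -a   # fallback when crictl is missing or errors out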
	I1216 04:38:21.906105  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:21.916147  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:21.916206  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:21.941307  481598 cri.go:89] found id: ""
	I1216 04:38:21.941321  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.941328  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:21.941333  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:21.941399  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:21.966745  481598 cri.go:89] found id: ""
	I1216 04:38:21.966760  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.966767  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:21.966772  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:21.966831  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:21.996091  481598 cri.go:89] found id: ""
	I1216 04:38:21.996106  481598 logs.go:282] 0 containers: []
	W1216 04:38:21.996113  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:21.996117  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:21.996176  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:22.022731  481598 cri.go:89] found id: ""
	I1216 04:38:22.022746  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.022753  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:22.022758  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:22.022820  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:22.055034  481598 cri.go:89] found id: ""
	I1216 04:38:22.055048  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.055067  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:22.055072  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:22.055136  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:22.106853  481598 cri.go:89] found id: ""
	I1216 04:38:22.106868  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.106875  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:22.106880  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:22.106949  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:22.143371  481598 cri.go:89] found id: ""
	I1216 04:38:22.143385  481598 logs.go:282] 0 containers: []
	W1216 04:38:22.143392  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:22.143399  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:22.143410  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:22.209056  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:22.200890   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.201492   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203157   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.203493   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:22.204997   13661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:22.209083  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:22.209096  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:22.276728  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:22.276748  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:22.308467  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:22.308483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:22.373121  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:22.373141  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
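The dmesg capture keeps only kernel messages of warning severity and above, per util-linux dmesg(1): -H prints human-readable timestamps, -P disables the pager that -H would otherwise start, -L=never suppresses color escape codes so the saved text stays clean, and --level restricts output to the listed levels; the tail bounds the capture at 400 lines. Annotated:

# -P (--nopager) : don't pipe output into a pager (which -H enables by default)
# -H (--human)   : human-readable timestamps
# -L=never       : no ANSI color codes in the captured text
# --level ...    : warning severity and above only
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400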
	I1216 04:38:24.888068  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:24.898375  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:24.898438  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:24.922926  481598 cri.go:89] found id: ""
	I1216 04:38:24.922940  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.922953  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:24.922958  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:24.923018  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:24.948274  481598 cri.go:89] found id: ""
	I1216 04:38:24.948288  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.948296  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:24.948300  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:24.948366  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:24.973866  481598 cri.go:89] found id: ""
	I1216 04:38:24.973880  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.973888  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:24.973893  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:24.973950  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:24.999743  481598 cri.go:89] found id: ""
	I1216 04:38:24.999757  481598 logs.go:282] 0 containers: []
	W1216 04:38:24.999764  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:24.999769  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:24.999827  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:25.030266  481598 cri.go:89] found id: ""
	I1216 04:38:25.030280  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.030298  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:25.030303  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:25.030363  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:25.055976  481598 cri.go:89] found id: ""
	I1216 04:38:25.055991  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.056008  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:25.056014  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:25.056070  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:25.096522  481598 cri.go:89] found id: ""
	I1216 04:38:25.096537  481598 logs.go:282] 0 containers: []
	W1216 04:38:25.096553  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:25.096568  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:25.096580  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:25.171632  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:25.162141   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.162937   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.164740   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.165464   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:25.166973   13764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:25.171649  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:25.171661  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:25.239309  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:25.239330  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:25.268791  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:25.268807  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:25.345864  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:25.345887  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:27.863617  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:27.874797  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:27.874872  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:27.904044  481598 cri.go:89] found id: ""
	I1216 04:38:27.904057  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.904064  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:27.904070  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:27.904135  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:27.930157  481598 cri.go:89] found id: ""
	I1216 04:38:27.930172  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.930179  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:27.930184  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:27.930248  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:27.960176  481598 cri.go:89] found id: ""
	I1216 04:38:27.960203  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.960211  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:27.960216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:27.960287  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:27.986202  481598 cri.go:89] found id: ""
	I1216 04:38:27.986215  481598 logs.go:282] 0 containers: []
	W1216 04:38:27.986222  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:27.986227  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:27.986284  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:28.017804  481598 cri.go:89] found id: ""
	I1216 04:38:28.017818  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.017825  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:28.017830  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:28.017899  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:28.048381  481598 cri.go:89] found id: ""
	I1216 04:38:28.048397  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.048404  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:28.048410  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:28.048469  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:28.089010  481598 cri.go:89] found id: ""
	I1216 04:38:28.089024  481598 logs.go:282] 0 containers: []
	W1216 04:38:28.089032  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:28.089040  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:28.089051  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:28.107163  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:28.107185  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:28.185125  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:28.176718   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.177346   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179024   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.179600   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:28.181158   13877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:28.185136  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:28.185146  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:28.253973  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:28.253993  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:28.284589  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:28.284611  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:30.850377  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:30.860658  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:30.860717  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:30.885504  481598 cri.go:89] found id: ""
	I1216 04:38:30.885519  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.885526  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:30.885531  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:30.885592  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:30.910273  481598 cri.go:89] found id: ""
	I1216 04:38:30.910287  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.910294  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:30.910299  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:30.910360  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:30.935120  481598 cri.go:89] found id: ""
	I1216 04:38:30.935134  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.935140  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:30.935145  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:30.935200  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:30.960866  481598 cri.go:89] found id: ""
	I1216 04:38:30.960879  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.960886  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:30.960891  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:30.960947  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:30.986279  481598 cri.go:89] found id: ""
	I1216 04:38:30.986294  481598 logs.go:282] 0 containers: []
	W1216 04:38:30.986302  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:30.986306  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:30.986367  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:31.014463  481598 cri.go:89] found id: ""
	I1216 04:38:31.014486  481598 logs.go:282] 0 containers: []
	W1216 04:38:31.014493  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:31.014499  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:31.014561  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:31.041177  481598 cri.go:89] found id: ""
	I1216 04:38:31.041198  481598 logs.go:282] 0 containers: []
	W1216 04:38:31.041205  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:31.041213  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:31.041248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:31.083930  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:31.083946  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:31.155612  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:31.155632  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:31.171599  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:31.171616  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:31.238570  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:31.230375   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.231355   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.232487   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.233079   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:31.234687   13996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:31.238580  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:31.238590  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:33.806752  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:33.816682  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:33.816748  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:33.841422  481598 cri.go:89] found id: ""
	I1216 04:38:33.841437  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.841444  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:33.841449  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:33.841508  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:33.866870  481598 cri.go:89] found id: ""
	I1216 04:38:33.866884  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.866891  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:33.866896  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:33.866954  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:33.892338  481598 cri.go:89] found id: ""
	I1216 04:38:33.892352  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.892360  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:33.892365  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:33.892428  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:33.920004  481598 cri.go:89] found id: ""
	I1216 04:38:33.920018  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.920025  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:33.920030  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:33.920088  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:33.950159  481598 cri.go:89] found id: ""
	I1216 04:38:33.950173  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.950180  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:33.950185  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:33.950244  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:33.976065  481598 cri.go:89] found id: ""
	I1216 04:38:33.976079  481598 logs.go:282] 0 containers: []
	W1216 04:38:33.976086  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:33.976092  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:33.976172  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:34.001694  481598 cri.go:89] found id: ""
	I1216 04:38:34.001710  481598 logs.go:282] 0 containers: []
	W1216 04:38:34.001721  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:34.001729  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:34.001741  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:34.041633  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:34.041651  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:34.108611  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:34.108630  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:34.125509  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:34.125525  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:34.196710  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:34.188193   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.189247   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191038   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.191344   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:34.192807   14102 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:34.196735  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:34.196746  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:36.764814  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:36.774892  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:36.774950  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:36.800624  481598 cri.go:89] found id: ""
	I1216 04:38:36.800640  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.800647  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:36.800652  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:36.800715  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:36.826259  481598 cri.go:89] found id: ""
	I1216 04:38:36.826274  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.826281  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:36.826286  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:36.826343  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:36.852246  481598 cri.go:89] found id: ""
	I1216 04:38:36.852269  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.852277  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:36.852282  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:36.852351  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:36.877659  481598 cri.go:89] found id: ""
	I1216 04:38:36.877680  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.877688  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:36.877693  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:36.877752  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:36.903365  481598 cri.go:89] found id: ""
	I1216 04:38:36.903379  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.903385  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:36.903390  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:36.903446  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:36.928313  481598 cri.go:89] found id: ""
	I1216 04:38:36.928328  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.928335  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:36.928341  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:36.928399  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:36.953145  481598 cri.go:89] found id: ""
	I1216 04:38:36.953158  481598 logs.go:282] 0 containers: []
	W1216 04:38:36.953165  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:36.953172  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:36.953182  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:37.018934  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:37.018956  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:37.036483  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:37.036500  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:37.114492  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:37.106457   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.106872   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108430   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.108750   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:37.110247   14191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:37.114503  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:37.114514  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:37.191646  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:37.191667  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:38:39.722033  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:38:39.731793  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:38:39.731852  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:38:39.756808  481598 cri.go:89] found id: ""
	I1216 04:38:39.756822  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.756829  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:38:39.756834  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:38:39.756891  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:38:39.782419  481598 cri.go:89] found id: ""
	I1216 04:38:39.782440  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.782448  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:38:39.782453  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:38:39.782510  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:38:39.807545  481598 cri.go:89] found id: ""
	I1216 04:38:39.807559  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.807576  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:38:39.807581  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:38:39.807639  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:38:39.836801  481598 cri.go:89] found id: ""
	I1216 04:38:39.836816  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.836832  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:38:39.836844  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:38:39.836914  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:38:39.861851  481598 cri.go:89] found id: ""
	I1216 04:38:39.861865  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.861872  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:38:39.861877  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:38:39.861935  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:38:39.891116  481598 cri.go:89] found id: ""
	I1216 04:38:39.891130  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.891137  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:38:39.891144  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:38:39.891200  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:38:39.917011  481598 cri.go:89] found id: ""
	I1216 04:38:39.917026  481598 logs.go:282] 0 containers: []
	W1216 04:38:39.917032  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:38:39.917040  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:38:39.917050  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:38:39.983103  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:38:39.983124  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:38:39.997812  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:38:39.997829  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:38:40.072880  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:38:40.062419   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.063322   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066458   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.066896   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:38:40.068451   14297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 04:38:40.072890  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:38:40.072902  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:38:40.155262  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:38:40.155284  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
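The cycle traced above is minikube's apiserver health wait: pgrep for the kube-apiserver process, a crictl query for each expected control-plane container (cri.go), then a diagnostic sweep of kubelet and CRI-O journals, dmesg, "kubectl describe nodes", and container status (logs.go) before the next attempt roughly three seconds later. Below is a minimal Go sketch of that poll-and-gather pattern; checkAPIServer and gatherLogs are hypothetical stand-ins for illustration, not the actual helpers in minikube's source.

	// Sketch of the poll-and-gather loop seen in the log: probe for a
	// running apiserver and, on every miss, collect diagnostics before
	// sleeping and retrying until a deadline.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// checkAPIServer mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
	// probe above: it succeeds only if the process exists.
	func checkAPIServer() error {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	}

	// gatherLogs mirrors the per-iteration diagnostics: service journals
	// and an all-containers listing (a reduced subset, for illustration).
	func gatherLogs() {
		for _, args := range [][]string{
			{"journalctl", "-u", "kubelet", "-n", "400"},
			{"journalctl", "-u", "crio", "-n", "400"},
			{"crictl", "ps", "-a"},
		} {
			out, err := exec.Command("sudo", args...).CombinedOutput()
			fmt.Printf("%s: err=%v, %d bytes\n", args[0], err, len(out))
		}
	}

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := checkAPIServer(); err == nil {
				return nil
			}
			gatherLogs()                // diagnostics on every miss, as in the log
			time.Sleep(3 * time.Second) // the ~3s gap between cycles above
		}
		return errors.New("timed out waiting for kube-apiserver")
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}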
	[... eight further iterations of the same cycle, 04:38:42 through 04:39:03, identical except for timestamps, kubectl PIDs, and the order of the log-gathering steps: pgrep finds no kube-apiserver process, crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and every "kubectl describe nodes" attempt exits 1 with connection refused on localhost:8441 ...]
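Every describe-nodes attempt in these cycles dies on the same error: dial tcp [::1]:8441: connect: connection refused. "Connection refused" means the TCP handshake reached the host but nothing was listening on the apiserver port, as opposed to a timeout, which would point at a firewall or routing problem. The following illustrative Go probe (not part of the test harness) reproduces that distinction outside of kubectl:

	// Probe the apiserver port directly: "connection refused" = port
	// closed (no listener), timeout = filtered or unreachable.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port probe failed:", err) // e.g. "connect: connection refused"
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8441")
	}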
	I1216 04:39:06.317384  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:06.328685  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:06.328743  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:06.355872  481598 cri.go:89] found id: ""
	I1216 04:39:06.355887  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.355893  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:06.355907  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:06.355964  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:06.386605  481598 cri.go:89] found id: ""
	I1216 04:39:06.386619  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.386626  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:06.386631  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:06.386696  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:06.412102  481598 cri.go:89] found id: ""
	I1216 04:39:06.412117  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.412132  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:06.412137  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:06.412209  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:06.437654  481598 cri.go:89] found id: ""
	I1216 04:39:06.437669  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.437676  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:06.437681  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:06.437752  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:06.466130  481598 cri.go:89] found id: ""
	I1216 04:39:06.466145  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.466151  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:06.466156  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:06.466219  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:06.491149  481598 cri.go:89] found id: ""
	I1216 04:39:06.491163  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.491170  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:06.491176  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:06.491236  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:06.517113  481598 cri.go:89] found id: ""
	I1216 04:39:06.517127  481598 logs.go:282] 0 containers: []
	W1216 04:39:06.517134  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:06.517141  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:06.517165  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:06.532219  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:06.532236  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:06.610459  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:06.601795   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603005   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603804   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605522   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605849   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:06.601795   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603005   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.603804   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605522   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:06.605849   15245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:06.610469  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:06.610480  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:06.678489  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:06.678509  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:06.713694  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:06.713710  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
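	# The recurring "failed describe nodes" entries come from running the version-pinned
	# kubectl against the node-local kubeconfig while nothing listens on localhost:8441.
	# A hedged reproduction using the exact command from the log; it keeps failing with
	# "connection refused" until kube-apiserver is actually up:
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig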
	I1216 04:39:09.281978  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:09.291972  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:09.292040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:09.318986  481598 cri.go:89] found id: ""
	I1216 04:39:09.319002  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.319009  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:09.319014  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:09.319080  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:09.355810  481598 cri.go:89] found id: ""
	I1216 04:39:09.355823  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.355848  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:09.355853  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:09.355917  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:09.386910  481598 cri.go:89] found id: ""
	I1216 04:39:09.386939  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.386946  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:09.386951  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:09.387019  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:09.415820  481598 cri.go:89] found id: ""
	I1216 04:39:09.415834  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.415841  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:09.415846  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:09.415902  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:09.441866  481598 cri.go:89] found id: ""
	I1216 04:39:09.441881  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.441888  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:09.441892  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:09.441956  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:09.467703  481598 cri.go:89] found id: ""
	I1216 04:39:09.467718  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.467724  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:09.467730  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:09.467790  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:09.494307  481598 cri.go:89] found id: ""
	I1216 04:39:09.494322  481598 logs.go:282] 0 containers: []
	W1216 04:39:09.494329  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:09.494336  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:09.494346  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:09.521531  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:09.521549  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:09.587441  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:09.587464  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:09.602275  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:09.602291  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:09.664727  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:09.657029   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.657494   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659008   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659326   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.660782   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:09.657029   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.657494   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659008   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.659326   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:09.660782   15365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:09.664737  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:09.664748  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:12.233947  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:12.245865  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:12.245923  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:12.270410  481598 cri.go:89] found id: ""
	I1216 04:39:12.270425  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.270431  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:12.270437  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:12.270513  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:12.295309  481598 cri.go:89] found id: ""
	I1216 04:39:12.295323  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.295330  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:12.295334  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:12.295391  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:12.326327  481598 cri.go:89] found id: ""
	I1216 04:39:12.326342  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.326349  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:12.326354  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:12.326415  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:12.358181  481598 cri.go:89] found id: ""
	I1216 04:39:12.358196  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.358203  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:12.358208  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:12.358309  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:12.390281  481598 cri.go:89] found id: ""
	I1216 04:39:12.390296  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.390303  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:12.390308  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:12.390365  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:12.419429  481598 cri.go:89] found id: ""
	I1216 04:39:12.419444  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.419451  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:12.419456  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:12.419512  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:12.445137  481598 cri.go:89] found id: ""
	I1216 04:39:12.445151  481598 logs.go:282] 0 containers: []
	W1216 04:39:12.445159  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:12.445167  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:12.445177  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:12.510786  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:12.510805  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:12.525785  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:12.525801  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:12.590602  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:12.581842   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.582992   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.584571   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.585097   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.586642   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:12.581842   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.582992   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.584571   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.585097   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:12.586642   15457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:12.590616  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:12.590627  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:12.664304  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:12.664331  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:15.192618  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:15.202786  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:15.202855  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:15.227787  481598 cri.go:89] found id: ""
	I1216 04:39:15.227801  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.227808  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:15.227813  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:15.227875  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:15.254490  481598 cri.go:89] found id: ""
	I1216 04:39:15.254505  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.254512  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:15.254517  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:15.254578  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:15.280037  481598 cri.go:89] found id: ""
	I1216 04:39:15.280052  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.280060  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:15.280064  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:15.280124  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:15.306278  481598 cri.go:89] found id: ""
	I1216 04:39:15.306295  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.306303  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:15.306308  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:15.306368  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:15.338132  481598 cri.go:89] found id: ""
	I1216 04:39:15.338146  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.338152  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:15.338157  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:15.338215  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:15.365557  481598 cri.go:89] found id: ""
	I1216 04:39:15.365571  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.365578  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:15.365583  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:15.365640  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:15.394440  481598 cri.go:89] found id: ""
	I1216 04:39:15.394454  481598 logs.go:282] 0 containers: []
	W1216 04:39:15.394461  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:15.394469  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:15.394478  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:15.460219  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:15.460240  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:15.475344  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:15.475362  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:15.543524  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:15.535805   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.536549   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538069   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538584   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.539605   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:15.535805   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.536549   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538069   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.538584   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:15.539605   15561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:15.543542  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:15.543552  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:15.611736  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:15.611757  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:18.147208  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:18.157570  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:18.157629  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:18.182325  481598 cri.go:89] found id: ""
	I1216 04:39:18.182339  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.182346  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:18.182351  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:18.182409  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:18.211344  481598 cri.go:89] found id: ""
	I1216 04:39:18.211358  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.211365  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:18.211370  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:18.211430  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:18.236501  481598 cri.go:89] found id: ""
	I1216 04:39:18.236518  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.236525  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:18.236533  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:18.236600  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:18.261000  481598 cri.go:89] found id: ""
	I1216 04:39:18.261013  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.261020  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:18.261025  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:18.261112  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:18.286887  481598 cri.go:89] found id: ""
	I1216 04:39:18.286901  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.286908  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:18.286913  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:18.286970  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:18.311492  481598 cri.go:89] found id: ""
	I1216 04:39:18.311506  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.311514  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:18.311519  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:18.311577  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:18.350624  481598 cri.go:89] found id: ""
	I1216 04:39:18.350638  481598 logs.go:282] 0 containers: []
	W1216 04:39:18.350645  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:18.350652  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:18.350663  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:18.424437  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:18.424461  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:18.439409  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:18.439425  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:18.503408  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:18.495202   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.495809   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.497576   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.498108   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.499707   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:18.495202   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.495809   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.497576   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.498108   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:18.499707   15666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:18.503426  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:18.503439  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:18.572236  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:18.572256  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:21.099923  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:21.109895  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:21.109959  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:21.135095  481598 cri.go:89] found id: ""
	I1216 04:39:21.135110  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.135117  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:21.135122  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:21.135188  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:21.159978  481598 cri.go:89] found id: ""
	I1216 04:39:21.159991  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.159998  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:21.160002  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:21.160060  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:21.184861  481598 cri.go:89] found id: ""
	I1216 04:39:21.184875  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.184882  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:21.184887  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:21.184943  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:21.215362  481598 cri.go:89] found id: ""
	I1216 04:39:21.215376  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.215383  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:21.215388  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:21.215451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:21.241352  481598 cri.go:89] found id: ""
	I1216 04:39:21.241366  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.241373  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:21.241378  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:21.241435  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:21.270124  481598 cri.go:89] found id: ""
	I1216 04:39:21.270139  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.270146  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:21.270151  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:21.270210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:21.294836  481598 cri.go:89] found id: ""
	I1216 04:39:21.294850  481598 logs.go:282] 0 containers: []
	W1216 04:39:21.294857  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:21.294865  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:21.294876  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:21.340249  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:21.340265  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:21.415950  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:21.415975  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:21.431603  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:21.431619  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:21.496240  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:21.487807   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.488585   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490277   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490834   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.492427   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:21.487807   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.488585   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490277   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.490834   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:21.492427   15783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:21.496250  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:21.496260  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:24.064476  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:24.075218  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:24.075282  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:24.100790  481598 cri.go:89] found id: ""
	I1216 04:39:24.100804  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.100810  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:24.100815  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:24.100870  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:24.127285  481598 cri.go:89] found id: ""
	I1216 04:39:24.127301  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.127308  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:24.127312  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:24.127371  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:24.156427  481598 cri.go:89] found id: ""
	I1216 04:39:24.156440  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.156447  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:24.156452  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:24.156513  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:24.182130  481598 cri.go:89] found id: ""
	I1216 04:39:24.182146  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.182154  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:24.182159  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:24.182216  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:24.207363  481598 cri.go:89] found id: ""
	I1216 04:39:24.207378  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.207385  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:24.207390  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:24.207451  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:24.235986  481598 cri.go:89] found id: ""
	I1216 04:39:24.236001  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.236017  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:24.236022  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:24.236077  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:24.260561  481598 cri.go:89] found id: ""
	I1216 04:39:24.260582  481598 logs.go:282] 0 containers: []
	W1216 04:39:24.260589  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:24.260597  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:24.260608  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:24.328717  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:24.328738  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:24.362340  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:24.362357  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:24.435463  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:24.435483  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:24.452196  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:24.452212  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:24.517484  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:24.509289   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.509913   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511537   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511992   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.513587   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:24.509289   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.509913   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511537   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.511992   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:24.513587   15889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
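	# To confirm what every retry above is seeing (no apiserver process, nothing bound
	# to port 8441): the pgrep pattern is copied from the log; the ss check is an
	# illustrative addition, not part of minikube's probe:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	sudo ss -tlnp | grep -w 8441 || echo "nothing listening on :8441"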
	I1216 04:39:27.018375  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:27.028921  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:27.028982  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:27.058968  481598 cri.go:89] found id: ""
	I1216 04:39:27.058984  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.058991  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:27.058996  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:27.059058  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:27.086788  481598 cri.go:89] found id: ""
	I1216 04:39:27.086802  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.086808  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:27.086815  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:27.086872  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:27.111593  481598 cri.go:89] found id: ""
	I1216 04:39:27.111607  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.111629  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:27.111635  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:27.111700  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:27.135786  481598 cri.go:89] found id: ""
	I1216 04:39:27.135800  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.135816  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:27.135822  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:27.135881  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:27.175564  481598 cri.go:89] found id: ""
	I1216 04:39:27.175577  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.175593  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:27.175598  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:27.175670  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:27.201020  481598 cri.go:89] found id: ""
	I1216 04:39:27.201034  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.201041  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:27.201048  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:27.201123  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:27.226608  481598 cri.go:89] found id: ""
	I1216 04:39:27.226622  481598 logs.go:282] 0 containers: []
	W1216 04:39:27.226629  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:27.226637  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:27.226648  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:27.292121  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:27.292140  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:27.307824  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:27.307840  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:27.382707  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:27.371394   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.372197   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374043   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374339   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.375852   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:27.371394   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.372197   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374043   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.374339   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:27.375852   15976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:27.382717  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:27.382728  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:27.450745  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:27.450764  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:29.981824  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:29.991752  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:29.991812  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:30.027720  481598 cri.go:89] found id: ""
	I1216 04:39:30.027737  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.027744  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:30.027749  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:30.027824  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:30.064834  481598 cri.go:89] found id: ""
	I1216 04:39:30.064862  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.064869  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:30.064875  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:30.064942  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:30.092327  481598 cri.go:89] found id: ""
	I1216 04:39:30.092341  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.092349  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:30.092354  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:30.092415  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:30.119568  481598 cri.go:89] found id: ""
	I1216 04:39:30.119583  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.119590  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:30.119595  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:30.119654  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:30.145948  481598 cri.go:89] found id: ""
	I1216 04:39:30.145962  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.145970  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:30.145974  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:30.146037  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:30.174055  481598 cri.go:89] found id: ""
	I1216 04:39:30.174069  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.174077  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:30.174082  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:30.174148  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:30.200676  481598 cri.go:89] found id: ""
	I1216 04:39:30.200704  481598 logs.go:282] 0 containers: []
	W1216 04:39:30.200711  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:30.200719  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:30.200729  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:30.273177  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:30.273199  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:30.307730  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:30.307749  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:30.380128  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:30.380149  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:30.398650  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:30.398668  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:30.464666  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:30.456212   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.456700   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458422   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458756   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.460283   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:30.456212   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.456700   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458422   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.458756   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:30.460283   16099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
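	# The cycle above now repeats roughly every three seconds for the rest of
	# this log: minikube probes for an apiserver process, lists CRI containers
	# for each control-plane component, finds none, gathers logs, and retries.
	# The probe reduces to the commands below; a minimal sketch copied from the
	# Run: lines above, assuming shell access inside the minikube node. Empty
	# crictl output corresponds to the "0 containers" results logged here.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # any live apiserver process?
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"     # empty: no container in any state
	done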
	I1216 04:39:32.965244  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:32.975770  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:32.975829  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:33.008069  481598 cri.go:89] found id: ""
	I1216 04:39:33.008086  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.008094  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:33.008099  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:33.008180  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:33.035228  481598 cri.go:89] found id: ""
	I1216 04:39:33.035242  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.035249  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:33.035254  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:33.035319  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:33.062504  481598 cri.go:89] found id: ""
	I1216 04:39:33.062518  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.062525  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:33.062530  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:33.062588  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:33.088441  481598 cri.go:89] found id: ""
	I1216 04:39:33.088455  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.088462  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:33.088467  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:33.088529  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:33.119260  481598 cri.go:89] found id: ""
	I1216 04:39:33.119274  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.119281  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:33.119286  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:33.119346  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:33.150552  481598 cri.go:89] found id: ""
	I1216 04:39:33.150567  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.150575  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:33.150580  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:33.150644  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:33.180001  481598 cri.go:89] found id: ""
	I1216 04:39:33.180016  481598 logs.go:282] 0 containers: []
	W1216 04:39:33.180023  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:33.180030  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:33.180040  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:33.248727  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:33.248752  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:33.277683  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:33.277700  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:33.350702  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:33.350721  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:33.369208  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:33.369248  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:33.439765  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:33.431154   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.432026   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.433573   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.434049   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.435592   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:33.431154   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.432026   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.433573   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.434049   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:33.435592   16207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:35.940031  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:35.950049  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:35.950107  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:35.975196  481598 cri.go:89] found id: ""
	I1216 04:39:35.975209  481598 logs.go:282] 0 containers: []
	W1216 04:39:35.975216  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:35.975221  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:35.975277  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:36.001797  481598 cri.go:89] found id: ""
	I1216 04:39:36.001812  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.001820  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:36.001826  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:36.001890  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:36.036431  481598 cri.go:89] found id: ""
	I1216 04:39:36.036446  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.036454  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:36.036459  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:36.036525  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:36.063963  481598 cri.go:89] found id: ""
	I1216 04:39:36.063978  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.063985  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:36.063990  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:36.064048  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:36.090639  481598 cri.go:89] found id: ""
	I1216 04:39:36.090653  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.090660  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:36.090665  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:36.090724  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:36.116793  481598 cri.go:89] found id: ""
	I1216 04:39:36.116807  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.116816  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:36.116821  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:36.116880  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:36.141959  481598 cri.go:89] found id: ""
	I1216 04:39:36.141972  481598 logs.go:282] 0 containers: []
	W1216 04:39:36.141979  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:36.141986  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:36.141996  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:36.208976  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:36.208996  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:36.239530  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:36.239546  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:36.305220  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:36.305245  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:36.322139  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:36.322169  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:36.399936  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:36.391294   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.391711   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.393476   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.394135   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.395762   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:36.391294   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.391711   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.393476   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.394135   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:36.395762   16310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:38.900194  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:38.910569  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:38.910632  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:38.936840  481598 cri.go:89] found id: ""
	I1216 04:39:38.936854  481598 logs.go:282] 0 containers: []
	W1216 04:39:38.936861  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:38.936867  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:38.936926  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:38.969994  481598 cri.go:89] found id: ""
	I1216 04:39:38.970008  481598 logs.go:282] 0 containers: []
	W1216 04:39:38.970016  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:38.970021  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:38.970092  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:39.000246  481598 cri.go:89] found id: ""
	I1216 04:39:39.000260  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.000267  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:39.000272  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:39.000328  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:39.028053  481598 cri.go:89] found id: ""
	I1216 04:39:39.028068  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.028075  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:39.028080  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:39.028139  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:39.053044  481598 cri.go:89] found id: ""
	I1216 04:39:39.053058  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.053100  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:39.053107  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:39.053165  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:39.078212  481598 cri.go:89] found id: ""
	I1216 04:39:39.078226  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.078234  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:39.078239  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:39.078296  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:39.103968  481598 cri.go:89] found id: ""
	I1216 04:39:39.103982  481598 logs.go:282] 0 containers: []
	W1216 04:39:39.103994  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:39.104001  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:39.104011  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:39.171261  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:39.171283  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:39.203918  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:39.203937  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:39.269162  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:39.269183  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:39.283640  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:39.283658  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:39.357490  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:39.349083   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.349811   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351336   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351851   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.353466   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:39.349083   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.349811   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351336   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.351851   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:39.353466   16408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:41.857783  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:41.868156  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:41.868218  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:41.896097  481598 cri.go:89] found id: ""
	I1216 04:39:41.896111  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.896118  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:41.896123  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:41.896183  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:41.923730  481598 cri.go:89] found id: ""
	I1216 04:39:41.923745  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.923752  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:41.923758  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:41.923814  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:41.948996  481598 cri.go:89] found id: ""
	I1216 04:39:41.949010  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.949017  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:41.949022  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:41.949098  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:41.973820  481598 cri.go:89] found id: ""
	I1216 04:39:41.973834  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.973841  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:41.973845  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:41.973901  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:41.999809  481598 cri.go:89] found id: ""
	I1216 04:39:41.999832  481598 logs.go:282] 0 containers: []
	W1216 04:39:41.999839  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:41.999845  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:41.999910  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:42.032190  481598 cri.go:89] found id: ""
	I1216 04:39:42.032216  481598 logs.go:282] 0 containers: []
	W1216 04:39:42.032224  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:42.032229  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:42.032301  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:42.059655  481598 cri.go:89] found id: ""
	I1216 04:39:42.059679  481598 logs.go:282] 0 containers: []
	W1216 04:39:42.059687  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:42.059694  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:42.059705  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:42.127853  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:42.127875  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:42.146370  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:42.146393  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:42.223415  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:42.212968   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.213792   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.215670   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.216278   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.218024   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:42.212968   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.213792   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.215670   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.216278   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:42.218024   16499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:42.223444  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:42.223457  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:42.304338  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:42.304368  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
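	# Between probes, minikube gathers the same four log sources each cycle
	# (only their order varies). The commands are copied verbatim from the
	# Run: lines above and are executed as root inside the node:
	sudo journalctl -u kubelet -n 400                                          # last 400 kubelet entries
	sudo journalctl -u crio -n 400                                             # last 400 CRI-O entries
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status, Docker fallback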
	I1216 04:39:44.847911  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:44.858741  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:44.858820  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:44.884095  481598 cri.go:89] found id: ""
	I1216 04:39:44.884110  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.884118  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:44.884122  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:44.884181  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:44.911877  481598 cri.go:89] found id: ""
	I1216 04:39:44.911891  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.911898  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:44.911902  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:44.911960  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:44.938117  481598 cri.go:89] found id: ""
	I1216 04:39:44.938132  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.938139  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:44.938144  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:44.938204  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:44.972779  481598 cri.go:89] found id: ""
	I1216 04:39:44.972793  481598 logs.go:282] 0 containers: []
	W1216 04:39:44.972800  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:44.972805  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:44.972862  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:45.000033  481598 cri.go:89] found id: ""
	I1216 04:39:45.000047  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.000054  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:45.000060  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:45.000121  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:45.072214  481598 cri.go:89] found id: ""
	I1216 04:39:45.072234  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.072244  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:45.072250  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:45.072325  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:45.112612  481598 cri.go:89] found id: ""
	I1216 04:39:45.112632  481598 logs.go:282] 0 containers: []
	W1216 04:39:45.112641  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:45.112653  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:45.112668  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:45.193381  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:45.193407  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:45.244205  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:45.244225  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:45.324983  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:45.325004  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:45.340857  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:45.340880  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:45.423270  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:45.414685   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.415307   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.416945   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.417545   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.419306   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:45.414685   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.415307   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.416945   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.417545   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:45.419306   16622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:47.923526  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:47.933779  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:47.933853  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:47.960777  481598 cri.go:89] found id: ""
	I1216 04:39:47.960793  481598 logs.go:282] 0 containers: []
	W1216 04:39:47.960800  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:47.960804  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:47.960863  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:47.990010  481598 cri.go:89] found id: ""
	I1216 04:39:47.990024  481598 logs.go:282] 0 containers: []
	W1216 04:39:47.990031  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:47.990036  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:47.990094  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:48.021881  481598 cri.go:89] found id: ""
	I1216 04:39:48.021897  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.021908  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:48.021914  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:48.021978  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:48.048841  481598 cri.go:89] found id: ""
	I1216 04:39:48.048860  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.048867  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:48.048872  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:48.048947  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:48.074988  481598 cri.go:89] found id: ""
	I1216 04:39:48.075002  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.075010  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:48.075015  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:48.075073  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:48.101288  481598 cri.go:89] found id: ""
	I1216 04:39:48.101303  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.101320  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:48.101325  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:48.101383  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:48.126469  481598 cri.go:89] found id: ""
	I1216 04:39:48.126483  481598 logs.go:282] 0 containers: []
	W1216 04:39:48.126489  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:48.126497  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:48.126508  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:48.160206  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:48.160222  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:48.226864  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:48.226883  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:48.241861  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:48.241879  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:48.311183  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:48.302762   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.303348   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.304889   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.305401   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.306868   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:48.302762   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.303348   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.304889   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.305401   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:48.306868   16722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:48.311197  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:48.311208  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:50.890106  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:50.900561  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:50.900623  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:50.925477  481598 cri.go:89] found id: ""
	I1216 04:39:50.925491  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.925498  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:50.925503  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:50.925573  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:50.950590  481598 cri.go:89] found id: ""
	I1216 04:39:50.950604  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.950611  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:50.950615  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:50.950670  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:50.975563  481598 cri.go:89] found id: ""
	I1216 04:39:50.975577  481598 logs.go:282] 0 containers: []
	W1216 04:39:50.975584  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:50.975588  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:50.975649  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:51.001446  481598 cri.go:89] found id: ""
	I1216 04:39:51.001460  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.001468  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:51.001473  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:51.001546  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:51.036808  481598 cri.go:89] found id: ""
	I1216 04:39:51.036822  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.036830  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:51.036834  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:51.036893  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:51.063122  481598 cri.go:89] found id: ""
	I1216 04:39:51.063136  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.063143  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:51.063148  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:51.063204  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:51.091909  481598 cri.go:89] found id: ""
	I1216 04:39:51.091924  481598 logs.go:282] 0 containers: []
	W1216 04:39:51.091931  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:51.091938  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:51.091949  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:51.157330  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:51.157357  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:51.172521  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:51.172537  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:51.237104  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:51.228688   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.229354   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.230964   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.231596   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.233259   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:51.228688   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.229354   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.230964   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.231596   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:51.233259   16815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:51.237115  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:51.237126  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:51.310463  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:51.310484  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:53.856519  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:53.866849  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:53.866907  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:53.892183  481598 cri.go:89] found id: ""
	I1216 04:39:53.892197  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.892204  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:53.892210  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:53.892269  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:53.917961  481598 cri.go:89] found id: ""
	I1216 04:39:53.917975  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.917983  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:53.917987  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:53.918046  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:53.943214  481598 cri.go:89] found id: ""
	I1216 04:39:53.943228  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.943235  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:53.943240  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:53.943298  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:53.968696  481598 cri.go:89] found id: ""
	I1216 04:39:53.968710  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.968717  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:53.968722  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:53.968778  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:53.993878  481598 cri.go:89] found id: ""
	I1216 04:39:53.993892  481598 logs.go:282] 0 containers: []
	W1216 04:39:53.993900  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:53.993905  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:53.993961  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:54.021892  481598 cri.go:89] found id: ""
	I1216 04:39:54.021911  481598 logs.go:282] 0 containers: []
	W1216 04:39:54.021918  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:54.021924  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:54.021989  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:54.048339  481598 cri.go:89] found id: ""
	I1216 04:39:54.048353  481598 logs.go:282] 0 containers: []
	W1216 04:39:54.048360  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:54.048368  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:54.048379  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:54.115518  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:54.107249   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.107772   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109446   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109968   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.111592   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:54.107249   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.107772   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109446   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.109968   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:54.111592   16915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:54.115529  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:54.115540  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:54.184110  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:54.184130  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:54.212611  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:54.212627  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:54.280294  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:54.280314  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
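Every cycle in this stretch is the same wait loop: minikube polls for a kube-apiserver process, lists each expected control-plane container by name (all come back empty), then re-gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs; describe nodes keeps failing because nothing is listening on the profile's apiserver port 8441. A minimal bash sketch of the same manual check, using only commands that already appear in the log:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'       # any apiserver process on the node?
    sudo crictl ps -a --quiet --name=kube-apiserver    # any apiserver container, in any state?
    sudo journalctl -u kubelet -n 400                  # if both are empty, read the kubelet's own logs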
	I1216 04:39:56.795621  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:56.805834  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:56.805904  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:56.831835  481598 cri.go:89] found id: ""
	I1216 04:39:56.831850  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.831857  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:56.831862  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:56.831920  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:56.857986  481598 cri.go:89] found id: ""
	I1216 04:39:56.858000  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.858007  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:56.858012  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:56.858086  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:56.884049  481598 cri.go:89] found id: ""
	I1216 04:39:56.884062  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.884069  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:56.884074  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:56.884129  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:56.909467  481598 cri.go:89] found id: ""
	I1216 04:39:56.909481  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.909488  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:56.909493  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:56.909553  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:56.935361  481598 cri.go:89] found id: ""
	I1216 04:39:56.935375  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.935382  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:56.935387  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:56.935444  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:56.963724  481598 cri.go:89] found id: ""
	I1216 04:39:56.963738  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.963745  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:56.963750  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:56.963807  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:56.988482  481598 cri.go:89] found id: ""
	I1216 04:39:56.988495  481598 logs.go:282] 0 containers: []
	W1216 04:39:56.988502  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:56.988510  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:56.988520  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:39:57.057566  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:39:57.057587  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:39:57.073142  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:39:57.073160  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:39:57.138961  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:39:57.130646   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.131071   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.132726   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.133151   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.134926   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:39:57.130646   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.131071   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.132726   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.133151   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:39:57.134926   17025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:39:57.138972  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:39:57.138983  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:39:57.206475  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:39:57.206497  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:39:59.739022  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:39:59.749638  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:39:59.749700  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:39:59.776094  481598 cri.go:89] found id: ""
	I1216 04:39:59.776109  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.776115  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:39:59.776120  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:39:59.776180  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:39:59.802606  481598 cri.go:89] found id: ""
	I1216 04:39:59.802621  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.802628  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:39:59.802634  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:39:59.802697  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:39:59.829710  481598 cri.go:89] found id: ""
	I1216 04:39:59.829724  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.829731  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:39:59.829736  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:39:59.829808  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:39:59.859658  481598 cri.go:89] found id: ""
	I1216 04:39:59.859673  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.859680  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:39:59.859685  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:39:59.859742  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:39:59.884817  481598 cri.go:89] found id: ""
	I1216 04:39:59.884831  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.884838  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:39:59.884843  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:39:59.884906  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:39:59.911195  481598 cri.go:89] found id: ""
	I1216 04:39:59.911210  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.911217  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:39:59.911223  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:39:59.911283  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:39:59.936870  481598 cri.go:89] found id: ""
	I1216 04:39:59.936885  481598 logs.go:282] 0 containers: []
	W1216 04:39:59.936891  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:39:59.936899  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:39:59.936909  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:00.003032  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:00.003054  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:00.086753  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:00.086772  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:00.242338  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:00.228915   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.229766   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.232549   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.234186   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.236627   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:40:00.228915   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.229766   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.232549   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.234186   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:00.236627   17134 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:40:00.242351  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:00.242395  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:00.380976  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:00.381000  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:40:02.964729  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:02.974990  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:40:02.975051  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:40:03.001443  481598 cri.go:89] found id: ""
	I1216 04:40:03.001458  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.001466  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:40:03.001471  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:40:03.001538  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:40:03.030227  481598 cri.go:89] found id: ""
	I1216 04:40:03.030241  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.030249  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:40:03.030254  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:40:03.030315  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:40:03.056406  481598 cri.go:89] found id: ""
	I1216 04:40:03.056421  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.056429  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:40:03.056439  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:40:03.056500  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:40:03.084430  481598 cri.go:89] found id: ""
	I1216 04:40:03.084452  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.084460  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:40:03.084465  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:40:03.084527  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:40:03.112058  481598 cri.go:89] found id: ""
	I1216 04:40:03.112072  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.112079  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:40:03.112084  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:40:03.112150  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:40:03.139147  481598 cri.go:89] found id: ""
	I1216 04:40:03.139161  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.139168  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:40:03.139173  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:40:03.139231  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:40:03.170943  481598 cri.go:89] found id: ""
	I1216 04:40:03.170958  481598 logs.go:282] 0 containers: []
	W1216 04:40:03.170965  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:40:03.170973  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:40:03.170984  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:03.237388  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:03.237409  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:03.252191  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:03.252213  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:03.315123  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:03.306446   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.307653   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.308545   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.309495   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.310189   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:40:03.306446   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.307653   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.308545   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.309495   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:03.310189   17240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:40:03.315132  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:03.315143  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:03.388848  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:03.388869  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:40:05.923315  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:05.934216  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:40:05.934292  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:40:05.964778  481598 cri.go:89] found id: ""
	I1216 04:40:05.964791  481598 logs.go:282] 0 containers: []
	W1216 04:40:05.964798  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:40:05.964813  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:40:05.964895  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:40:05.991403  481598 cri.go:89] found id: ""
	I1216 04:40:05.991417  481598 logs.go:282] 0 containers: []
	W1216 04:40:05.991424  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:40:05.991429  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:40:05.991486  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:40:06.019838  481598 cri.go:89] found id: ""
	I1216 04:40:06.019853  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.019860  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:40:06.019865  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:40:06.019927  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:40:06.046554  481598 cri.go:89] found id: ""
	I1216 04:40:06.046569  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.046580  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:40:06.046585  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:40:06.046649  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:40:06.071958  481598 cri.go:89] found id: ""
	I1216 04:40:06.071973  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.071980  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:40:06.071985  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:40:06.072040  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:40:06.099079  481598 cri.go:89] found id: ""
	I1216 04:40:06.099094  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.099101  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:40:06.099106  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:40:06.099170  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:40:06.126168  481598 cri.go:89] found id: ""
	I1216 04:40:06.126188  481598 logs.go:282] 0 containers: []
	W1216 04:40:06.126195  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:40:06.126202  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:40:06.126213  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:40:06.192591  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:40:06.192611  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:40:06.207708  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:40:06.207729  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:40:06.274064  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:40:06.264712   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.265524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.267524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.268552   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.269537   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:40:06.264712   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.265524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.267524   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.268552   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:40:06.269537   17344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:40:06.274074  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:40:06.274086  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:40:06.343044  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:40:06.343066  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:40:08.873218  481598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:40:08.883654  481598 kubeadm.go:602] duration metric: took 4m3.325303057s to restartPrimaryControlPlane
	W1216 04:40:08.883714  481598 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 04:40:08.883788  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 04:40:09.294329  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:40:09.307484  481598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:40:09.315713  481598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:40:09.315769  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:40:09.323612  481598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:40:09.323622  481598 kubeadm.go:158] found existing configuration files:
	
	I1216 04:40:09.323675  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:40:09.331783  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:40:09.331838  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:40:09.339284  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:40:09.346837  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:40:09.346891  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:40:09.354493  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:40:09.362269  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:40:09.362328  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:40:09.369970  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:40:09.378044  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:40:09.378103  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
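The stale-config cleanup above follows a fixed pattern before the kubeadm init that comes next: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the grep does not succeed (here every grep exits 2, since kubeadm reset already removed the files). A rough bash equivalent of the sequence in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"    # removed on mismatch or when the file is absent
    done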
	I1216 04:40:09.385765  481598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:40:09.424060  481598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:40:09.424358  481598 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:40:09.495076  481598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:40:09.495141  481598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:40:09.495181  481598 kubeadm.go:319] OS: Linux
	I1216 04:40:09.495224  481598 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:40:09.495271  481598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:40:09.495318  481598 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:40:09.495365  481598 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:40:09.495412  481598 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:40:09.495459  481598 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:40:09.495502  481598 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:40:09.495550  481598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:40:09.495596  481598 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:40:09.563458  481598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:40:09.563582  481598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:40:09.563682  481598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:40:09.571744  481598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:40:09.577424  481598 out.go:252]   - Generating certificates and keys ...
	I1216 04:40:09.577526  481598 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:40:09.577597  481598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:40:09.577679  481598 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:40:09.577744  481598 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:40:09.577819  481598 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:40:09.577878  481598 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:40:09.577951  481598 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:40:09.578022  481598 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:40:09.578105  481598 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:40:09.578188  481598 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:40:09.578235  481598 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:40:09.578291  481598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:40:09.899760  481598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:40:10.102481  481598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:40:10.266020  481598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:40:10.669469  481598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:40:11.526452  481598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:40:11.527018  481598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:40:11.530635  481598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:40:11.533764  481598 out.go:252]   - Booting up control plane ...
	I1216 04:40:11.533860  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:40:11.533937  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:40:11.534462  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:40:11.549423  481598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:40:11.549689  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:40:11.557342  481598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:40:11.557601  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:40:11.557642  481598 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:40:11.689632  481598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:40:11.689752  481598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:44:11.687962  481598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001213504s
	I1216 04:44:11.687985  481598 kubeadm.go:319] 
	I1216 04:44:11.688045  481598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:44:11.688077  481598 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:44:11.688181  481598 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:44:11.688185  481598 kubeadm.go:319] 
	I1216 04:44:11.688293  481598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:44:11.688324  481598 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:44:11.688354  481598 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:44:11.688357  481598 kubeadm.go:319] 
	I1216 04:44:11.693131  481598 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:44:11.693558  481598 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:44:11.693669  481598 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:44:11.693904  481598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:44:11.693910  481598 kubeadm.go:319] 
	I1216 04:44:11.693977  481598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
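kubeadm aborted after the 4m0s kubelet health deadline; the probe it runs is a plain HTTP GET against the kubelet's healthz port on localhost, and the two commands it suggests are the quickest way to see why the kubelet never answered. All three can be run by hand on the node (a sketch using only what the log itself names):

    curl -sSL http://127.0.0.1:10248/healthz    # the exact check kubeadm performs
    systemctl status kubelet                    # is the unit even running?
    journalctl -xeu kubelet                     # and if not, why it exited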
	W1216 04:44:11.694089  481598 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001213504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
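Of the three preflight warnings in the dump above, the cgroups v1 deprecation is the one most likely to explain a kubelet that never becomes healthy on this 5.15 AWS kernel: per the warning, kubelet v1.35 or newer requires FailCgroupV1 to be explicitly set to false to keep running on a cgroup v1 host. Which cgroup version the host actually uses can be confirmed with a standard one-liner (not taken from this log):

    stat -fc %T /sys/fs/cgroup    # cgroup2fs means v2; tmpfs means the host is still on cgroups v1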
	
	I1216 04:44:11.694190  481598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 04:44:12.104466  481598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:44:12.116829  481598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:44:12.116881  481598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:44:12.124364  481598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:44:12.124372  481598 kubeadm.go:158] found existing configuration files:
	
	I1216 04:44:12.124420  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:44:12.131751  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:44:12.131807  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:44:12.138938  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:44:12.146429  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:44:12.146482  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:44:12.153782  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:44:12.161218  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:44:12.161270  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:44:12.168781  481598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:44:12.176219  481598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:44:12.176271  481598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:44:12.183435  481598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:44:12.295783  481598 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 04:44:12.296200  481598 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:44:12.361811  481598 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:48:14.074988  481598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 04:48:14.075012  481598 kubeadm.go:319] 
	I1216 04:48:14.075081  481598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
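The retry fails harder than the first attempt: the healthz probe now gets connection refused rather than a timeout, meaning nothing is listening on 10248 at all (the full phase log is replayed below). Rerunning the same init with the verbosity kubeadm suggests would show where it stalls; a sketch reusing the exact config path and PATH wrapper from the log, with the long --ignore-preflight-errors list elided:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --v=5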
	I1216 04:48:14.079141  481598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:48:14.079195  481598 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:48:14.079284  481598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:48:14.079338  481598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 04:48:14.079372  481598 kubeadm.go:319] OS: Linux
	I1216 04:48:14.079416  481598 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:48:14.079463  481598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:48:14.079508  481598 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:48:14.079555  481598 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:48:14.079602  481598 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:48:14.079664  481598 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:48:14.079709  481598 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:48:14.079755  481598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:48:14.079801  481598 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:48:14.079872  481598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:48:14.079966  481598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:48:14.080055  481598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:48:14.080117  481598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:48:14.083166  481598 out.go:252]   - Generating certificates and keys ...
	I1216 04:48:14.083255  481598 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:48:14.083327  481598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:48:14.083402  481598 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:48:14.083461  481598 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:48:14.083529  481598 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:48:14.083582  481598 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:48:14.083644  481598 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:48:14.083704  481598 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:48:14.083778  481598 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:48:14.083849  481598 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:48:14.083886  481598 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:48:14.083941  481598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:48:14.083991  481598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:48:14.084046  481598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:48:14.084103  481598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:48:14.084165  481598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:48:14.084218  481598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:48:14.084301  481598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:48:14.084366  481598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:48:14.087214  481598 out.go:252]   - Booting up control plane ...
	I1216 04:48:14.087326  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:48:14.087404  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:48:14.087497  481598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:48:14.087610  481598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:48:14.087707  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:48:14.087811  481598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:48:14.087895  481598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:48:14.087932  481598 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:48:14.088082  481598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:48:14.088189  481598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:48:14.088268  481598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00077674s
	I1216 04:48:14.088271  481598 kubeadm.go:319] 
	I1216 04:48:14.088334  481598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:48:14.088366  481598 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:48:14.088482  481598 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:48:14.088486  481598 kubeadm.go:319] 
	I1216 04:48:14.088595  481598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:48:14.088637  481598 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:48:14.088668  481598 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:48:14.088677  481598 kubeadm.go:319] 
	I1216 04:48:14.088733  481598 kubeadm.go:403] duration metric: took 12m8.569239535s to StartCluster
	I1216 04:48:14.088763  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:48:14.088824  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:48:14.121113  481598 cri.go:89] found id: ""
	I1216 04:48:14.121140  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.121148  481598 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:48:14.121153  481598 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:48:14.121210  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:48:14.150916  481598 cri.go:89] found id: ""
	I1216 04:48:14.150931  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.150938  481598 logs.go:284] No container was found matching "etcd"
	I1216 04:48:14.150943  481598 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:48:14.151005  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:48:14.177693  481598 cri.go:89] found id: ""
	I1216 04:48:14.177709  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.177716  481598 logs.go:284] No container was found matching "coredns"
	I1216 04:48:14.177721  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:48:14.177782  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:48:14.202900  481598 cri.go:89] found id: ""
	I1216 04:48:14.202914  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.202921  481598 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:48:14.202926  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:48:14.202983  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:48:14.229346  481598 cri.go:89] found id: ""
	I1216 04:48:14.229360  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.229367  481598 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:48:14.229372  481598 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:48:14.229433  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:48:14.255869  481598 cri.go:89] found id: ""
	I1216 04:48:14.255884  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.255891  481598 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:48:14.255896  481598 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:48:14.255953  481598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:48:14.282757  481598 cri.go:89] found id: ""
	I1216 04:48:14.282772  481598 logs.go:282] 0 containers: []
	W1216 04:48:14.282779  481598 logs.go:284] No container was found matching "kindnet"
	I1216 04:48:14.282787  481598 logs.go:123] Gathering logs for kubelet ...
	I1216 04:48:14.282797  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:48:14.349482  481598 logs.go:123] Gathering logs for dmesg ...
	I1216 04:48:14.349503  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 04:48:14.364748  481598 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:48:14.364765  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:48:14.440728  481598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:48:14.431516   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.432409   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434236   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434802   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.436554   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 04:48:14.431516   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.432409   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434236   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.434802   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:48:14.436554   21114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 04:48:14.440741  481598 logs.go:123] Gathering logs for CRI-O ...
	I1216 04:48:14.440751  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 04:48:14.515072  481598 logs.go:123] Gathering logs for container status ...
	I1216 04:48:14.515092  481598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 04:48:14.544694  481598 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:48:14.544736  481598 out.go:285] * 
	W1216 04:48:14.544844  481598 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:48:14.544900  481598 out.go:285] * 
	W1216 04:48:14.547108  481598 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:48:14.553105  481598 out.go:203] 
	W1216 04:48:14.555966  481598 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077674s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 04:48:14.556016  481598 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:48:14.556038  481598 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:48:14.559052  481598 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714709668Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.71475743Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714823679Z" level=info msg="Create NRI interface"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714952197Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714978487Z" level=info msg="runtime interface created"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.714994996Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715003956Z" level=info msg="runtime interface starting up..."
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715015205Z" level=info msg="starting plugins..."
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715027849Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 04:36:03 functional-763073 crio[9970]: time="2025-12-16T04:36:03.715097331Z" level=info msg="No systemd watchdog enabled"
	Dec 16 04:36:03 functional-763073 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.566937768Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5b381738-c32a-40c6-affb-c4aad9d726b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.567803155Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=7302f23d-29b3-4ddc-ad63-9af170663562 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.568336568Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=470a4814-2c77-4f21-97ca-d4b2d8b367c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.56886276Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e3d63019-6956-4b8d-9795-5e45ed470016 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.569572699Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1715eb88-0ece-47e1-8cf4-08ec329b9548 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.570118822Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=17ac1632-ceef-4623-82d4-95709ece00f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:40:09 functional-763073 crio[9970]: time="2025-12-16T04:40:09.570664255Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=9e736680-8e53-4709-9714-232fbfa617ef name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.365457664Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=66aba16f-2286-4957-9589-3f6b308f0653 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366373784Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a0b09546-fe1b-440e-8076-598a1e2930d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366892723Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=976ba277-fbb2-4db1-8ee0-ce87f329b2fa name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367464412Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=15d708f7-0c1f-4e61-bde7-afc75b1dc430 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367935941Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2d28f296-8f48-4bb2-bf27-13281f9a3b27 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368429435Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=82541142-23b6-4f48-816e-5b740356cd35 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368875848Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=29b0dee6-8ec8-4ecc-822d-bf19bcc0e034 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:50:03.581118   22556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:03.581838   22556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:03.583414   22556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:03.584005   22556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:03.585518   22556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	[Dec16 04:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:50:03 up  3:32,  0 user,  load average: 0.25, 0.29, 0.46
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:50:01 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:02 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1105.
	Dec 16 04:50:02 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:02 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:02 functional-763073 kubelet[22452]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:02 functional-763073 kubelet[22452]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:02 functional-763073 kubelet[22452]: E1216 04:50:02.132106   22452 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:02 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:02 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:02 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1106.
	Dec 16 04:50:02 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:02 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:02 functional-763073 kubelet[22472]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:02 functional-763073 kubelet[22472]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:02 functional-763073 kubelet[22472]: E1216 04:50:02.891932   22472 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:02 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:02 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:03 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1107.
	Dec 16 04:50:03 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:03 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:03 functional-763073 kubelet[22560]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:03 functional-763073 kubelet[22560]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:03 functional-763073 kubelet[22560]: E1216 04:50:03.633160   22560 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:03 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:03 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (355.884828ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.42s)
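The kubelet journal above shows the actual blocker behind the connection-refused errors: kubelet v1.35.0-beta.0 exits during configuration validation on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase never sees a healthy kubelet and nothing listens on port 8441. The captured output names two candidate workarounds; a minimal sketch of both follows, assuming a cgroup v1 runner like this 5.15.0-1084-aws kernel. Flag and field names come from the log itself; the YAML spelling of the field is inferred from the Go name 'FailCgroupV1', and neither fix is verified on this runner:

    # (a) minikube's own suggestion from the failure message:
    minikube start -p functional-763073 --extra-config=kubelet.cgroup-driver=systemd

    # (b) per the kubeadm preflight warning, explicitly re-enable cgroup v1 for
    # kubelet v1.35+ by adding a KubeletConfiguration document to the kubeadm
    # config (and explicitly skipping the SystemVerification check that flags it):
    #   apiVersion: kubelet.config.k8s.io/v1beta1
    #   kind: KubeletConfiguration
    #   failCgroupV1: false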
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 9 more times]
I1216 04:48:32.723668  441727 retry.go:31] will retry after 2.370670647s: Temporary Error: Get "http://10.103.214.36": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 11 more times]
I1216 04:48:45.095733  441727 retry.go:31] will retry after 2.447195345s: Temporary Error: Get "http://10.103.214.36": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 12 more times]
I1216 04:48:57.544112  441727 retry.go:31] will retry after 5.232755644s: Temporary Error: Get "http://10.103.214.36": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 14 more times]
I1216 04:49:12.777397  441727 retry.go:31] will retry after 11.471473324s: Temporary Error: Get "http://10.103.214.36": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(warning repeated 22 times in total)
I1216 04:49:34.249939  441727 retry.go:31] will retry after 17.535501847s: Temporary Error: Get "http://10.103.214.36": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(warning repeated 111 times in total)
E1216 04:51:25.287218  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (300.281318ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
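
For context, the polling loop that produced the repeated "connection refused" warning above can be approximated in a few lines of client-go. The sketch below is not the suite's actual helper; the kubeconfig path is a placeholder, but the namespace, label selector, and 4m0s deadline mirror the failure reported here:

// Minimal sketch (assumptions: placeholder kubeconfig path; 2s poll interval)
// of waiting for a pod matching integration-test=storage-provisioner to reach
// Running, retrying transient apiserver errors until a 4m0s deadline.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=storage-provisioner",
			})
			if err != nil {
				// Transient errors (e.g. connection refused while the
				// apiserver restarts) are logged and retried, not fatal.
				fmt.Println("WARNING:", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("pod failed to start within 4m0s:", err)
	}
}
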
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
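
The port bindings in the inspect dump are the only route from the host to the apiserver, so a quick way to recover the mapped port is to query Docker directly. A small sketch (not part of the test suite; the container name is taken from this report) that shells out to the docker CLI with a Go template:

// Prints the host port that Docker mapped to the node's 8441/tcp
// (the apiserver port). For the inspect output above this is 33151,
// i.e. 127.0.0.1:33151 on the CI host.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}`
	out, err := exec.Command("docker", "inspect", "-f", tmpl, "functional-763073").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out)))
}
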
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (289.344806ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-763073 image load --daemon kicbase/echo-server:functional-763073 --alsologtostderr                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image ls                                                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image save kicbase/echo-server:functional-763073 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image rm kicbase/echo-server:functional-763073 --alsologtostderr                                                                        │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image ls                                                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image ls                                                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image save --daemon kicbase/echo-server:functional-763073 --alsologtostderr                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh            │ functional-763073 ssh sudo cat /etc/ssl/certs/441727.pem                                                                                                  │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh            │ functional-763073 ssh sudo cat /usr/share/ca-certificates/441727.pem                                                                                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh            │ functional-763073 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh            │ functional-763073 ssh sudo cat /etc/ssl/certs/4417272.pem                                                                                                 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh            │ functional-763073 ssh sudo cat /usr/share/ca-certificates/4417272.pem                                                                                     │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh            │ functional-763073 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh            │ functional-763073 ssh sudo cat /etc/test/nested/copy/441727/hosts                                                                                         │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image ls --format short --alsologtostderr                                                                                               │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image ls --format yaml --alsologtostderr                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh            │ functional-763073 ssh pgrep buildkitd                                                                                                                     │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ image          │ functional-763073 image build -t localhost/my-image:functional-763073 testdata/build --alsologtostderr                                                    │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image ls                                                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image ls --format json --alsologtostderr                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image          │ functional-763073 image ls --format table --alsologtostderr                                                                                               │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ update-context │ functional-763073 update-context --alsologtostderr -v=2                                                                                                   │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ update-context │ functional-763073 update-context --alsologtostderr -v=2                                                                                                   │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ update-context │ functional-763073 update-context --alsologtostderr -v=2                                                                                                   │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:50:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:50:19.275087  498832 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:50:19.275230  498832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:50:19.275256  498832 out.go:374] Setting ErrFile to fd 2...
	I1216 04:50:19.275275  498832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:50:19.275561  498832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:50:19.275974  498832 out.go:368] Setting JSON to false
	I1216 04:50:19.276868  498832 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12766,"bootTime":1765847854,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:50:19.276969  498832 start.go:143] virtualization:  
	I1216 04:50:19.280401  498832 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:50:19.283297  498832 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:50:19.283468  498832 notify.go:221] Checking for updates...
	I1216 04:50:19.289162  498832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:50:19.292148  498832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:50:19.295134  498832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:50:19.297944  498832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:50:19.300988  498832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:50:19.304379  498832 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:50:19.305004  498832 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:50:19.352771  498832 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:50:19.352964  498832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:50:19.426222  498832 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:50:19.416940994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:50:19.426334  498832 docker.go:319] overlay module found
	I1216 04:50:19.429570  498832 out.go:179] * Using the docker driver based on existing profile
	I1216 04:50:19.432360  498832 start.go:309] selected driver: docker
	I1216 04:50:19.432375  498832 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:50:19.432474  498832 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:50:19.432581  498832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:50:19.497183  498832 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:50:19.487722858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:50:19.497651  498832 cni.go:84] Creating CNI manager for ""
	I1216 04:50:19.497714  498832 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:50:19.497750  498832 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:50:19.500776  498832 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.365457664Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=66aba16f-2286-4957-9589-3f6b308f0653 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366373784Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a0b09546-fe1b-440e-8076-598a1e2930d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366892723Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=976ba277-fbb2-4db1-8ee0-ce87f329b2fa name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367464412Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=15d708f7-0c1f-4e61-bde7-afc75b1dc430 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367935941Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2d28f296-8f48-4bb2-bf27-13281f9a3b27 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368429435Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=82541142-23b6-4f48-816e-5b740356cd35 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368875848Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=29b0dee6-8ec8-4ecc-822d-bf19bcc0e034 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.971042577Z" level=info msg="Checking image status: kicbase/echo-server:functional-763073" id=70a7356a-5558-4459-a73a-e89eebe7e1b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.97127796Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.971332812Z" level=info msg="Image kicbase/echo-server:functional-763073 not found" id=70a7356a-5558-4459-a73a-e89eebe7e1b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.97140999Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-763073 found" id=70a7356a-5558-4459-a73a-e89eebe7e1b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.996458455Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-763073" id=976812f7-10f3-4e01-8ec1-eb125d206dbf name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.996617455Z" level=info msg="Image docker.io/kicbase/echo-server:functional-763073 not found" id=976812f7-10f3-4e01-8ec1-eb125d206dbf name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.996672791Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-763073 found" id=976812f7-10f3-4e01-8ec1-eb125d206dbf name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:23 functional-763073 crio[9970]: time="2025-12-16T04:50:23.022513879Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-763073" id=c79ed0cd-703d-4525-bbed-3275903a7793 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:23 functional-763073 crio[9970]: time="2025-12-16T04:50:23.022687666Z" level=info msg="Image localhost/kicbase/echo-server:functional-763073 not found" id=c79ed0cd-703d-4525-bbed-3275903a7793 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:23 functional-763073 crio[9970]: time="2025-12-16T04:50:23.02273391Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-763073 found" id=c79ed0cd-703d-4525-bbed-3275903a7793 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.050577763Z" level=info msg="Checking image status: kicbase/echo-server:functional-763073" id=bb3f5e13-09e3-439c-b578-1ad3b1c28a1d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.050780424Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.050838336Z" level=info msg="Image kicbase/echo-server:functional-763073 not found" id=bb3f5e13-09e3-439c-b578-1ad3b1c28a1d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.050927732Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-763073 found" id=bb3f5e13-09e3-439c-b578-1ad3b1c28a1d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.089986356Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-763073" id=e89fb5a5-1d43-4cc6-8ef4-b40b006e0259 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.090160963Z" level=info msg="Image docker.io/kicbase/echo-server:functional-763073 not found" id=e89fb5a5-1d43-4cc6-8ef4-b40b006e0259 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.090205345Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-763073 found" id=e89fb5a5-1d43-4cc6-8ef4-b40b006e0259 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.12007273Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-763073" id=cb604810-945f-4549-afc7-3a61db5db53a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:52:24.332330   25384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:52:24.332870   25384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:52:24.334579   25384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:52:24.335119   25384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:52:24.336801   25384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
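
	One way to confirm this refusal comes from a dead apiserver rather than a broken network path is a bare TCP probe against the same endpoint. A throwaway sketch, assuming it runs on the CI host:

	// Dials the apiserver endpoint that kubectl fails against.
	// "connection refused" means the host is reachable but nothing
	// listens on 8441, consistent with the kubelet crash loop below.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 3*time.Second)
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("port 8441 is accepting connections")
	}
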
	
	
	==> dmesg <==
	[Dec16 01:17] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	[Dec16 04:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:52:24 up  3:34,  0 user,  load average: 0.54, 0.55, 0.55
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:52:21 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:52:22 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1292.
	Dec 16 04:52:22 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:52:22 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:52:22 functional-763073 kubelet[25260]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:52:22 functional-763073 kubelet[25260]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:52:22 functional-763073 kubelet[25260]: E1216 04:52:22.366240   25260 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:52:22 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:52:22 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:52:23 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1293.
	Dec 16 04:52:23 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:52:23 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:52:23 functional-763073 kubelet[25265]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:52:23 functional-763073 kubelet[25265]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:52:23 functional-763073 kubelet[25265]: E1216 04:52:23.108943   25265 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:52:23 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:52:23 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:52:23 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1294.
	Dec 16 04:52:23 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:52:23 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:52:23 functional-763073 kubelet[25301]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:52:23 functional-763073 kubelet[25301]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:52:23 functional-763073 kubelet[25301]: E1216 04:52:23.891966   25301 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:52:23 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:52:23 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (303.83328ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.62s)
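
The kubelet journal in the dump above points at the likely root cause for this and the neighboring failures: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd keeps it in a restart loop and the apiserver on port 8441 never comes back. A minimal way to confirm which cgroup version the host or the node container is on, assuming a standard systemd filesystem layout (cgroup2fs means v2, tmpfs means v1):

	stat -fc %T /sys/fs/cgroup                                 # on the host
	docker exec functional-763073 stat -fc %T /sys/fs/cgroup   # inside the node container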

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-763073 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-763073 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (63.217029ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-763073 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
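
The template error itself is a secondary symptom: with the apiserver refusing connections, kubectl is left with an empty List ({"items":[]}), and (index .items 0) then indexes past the end of the empty slice. As a sketch only (not the template functional_test.go actually uses), ranging over .items instead of indexing element 0 would print nothing on an empty list rather than erroring, while the connection failure would still surface through the exit code:

	kubectl --context functional-763073 get nodes -o go-template \
	  --template='{{range .items}}{{range $k, $v := .metadata.labels}}{{$k}} {{end}}{{end}}'
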
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
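
All five expected labels fail for the same reason: no node object is retrievable at all. Once the apiserver is reachable again, the minikube.k8s.io/* labels could be checked directly without a template, for example:

	kubectl --context functional-763073 get nodes --show-labels
	kubectl --context functional-763073 get node functional-763073 -o jsonpath='{.metadata.labels}'
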
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-763073
helpers_test.go:244: (dbg) docker inspect functional-763073:

-- stdout --
	[
	    {
	        "Id": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	        "Created": "2025-12-16T04:21:18.574151143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:21:18.645251496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/hosts",
	        "LogPath": "/var/lib/docker/containers/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a/d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a-json.log",
	        "Name": "/functional-763073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-763073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-763073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1100f8b4e1e1209359509a9b1053cf43f3beaf1a77fed92a5b50544979e996a",
	                "LowerDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94f7743f9c055dde7649074b5c8fd6d78ee7aa53d25db5cd529249d5b628a60b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-763073",
	                "Source": "/var/lib/docker/volumes/functional-763073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-763073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-763073",
	                "name.minikube.sigs.k8s.io": "functional-763073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c437a385e9a65ffb8203039a8abf0c3a15f10ed124c53eea18f471bc7c9b91",
	            "SandboxKey": "/var/run/docker/netns/93c437a385e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-763073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:21:e4:6c:21:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b73c07dab0b9d23e11f9d7ef326d4e1c281e1b7d8fb4df6e84eb9853a1392944",
	                    "EndpointID": "6235f13dd3635409d90a8c20bfef6e60eb4ca8efdc9a0efdfd4a1f2646d87e23",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-763073",
	                        "d1100f8b4e1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
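
The inspect output confirms the node container itself is healthy: it is Running, and 8441/tcp (the apiserver port) is published on 127.0.0.1:33151. Rather than scanning the full JSON, individual fields can be pulled with a --format expression, for example:

	# host port mapped to the apiserver port inside the node container
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-763073
	# node IP on the functional-763073 network
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-763073").IPAddress}}' functional-763073
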
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-763073 -n functional-763073: exit status 2 (309.275641ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount3 --alsologtostderr -v=1                      │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh       │ functional-763073 ssh findmnt -T /mount1                                                                                                                  │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh       │ functional-763073 ssh findmnt -T /mount1                                                                                                                  │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh       │ functional-763073 ssh findmnt -T /mount2                                                                                                                  │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh       │ functional-763073 ssh findmnt -T /mount3                                                                                                                  │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ mount     │ -p functional-763073 --kill=true                                                                                                                          │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ start     │ -p functional-763073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ start     │ -p functional-763073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ start     │ -p functional-763073 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-763073 --alsologtostderr -v=1                                                                                            │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ license   │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ ssh       │ functional-763073 ssh sudo systemctl is-active docker                                                                                                     │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ ssh       │ functional-763073 ssh sudo systemctl is-active containerd                                                                                                 │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │                     │
	│ image     │ functional-763073 image load --daemon kicbase/echo-server:functional-763073 --alsologtostderr                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image ls                                                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image load --daemon kicbase/echo-server:functional-763073 --alsologtostderr                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image ls                                                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image load --daemon kicbase/echo-server:functional-763073 --alsologtostderr                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image ls                                                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image save kicbase/echo-server:functional-763073 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image rm kicbase/echo-server:functional-763073 --alsologtostderr                                                                        │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image ls                                                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image ls                                                                                                                                │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	│ image     │ functional-763073 image save --daemon kicbase/echo-server:functional-763073 --alsologtostderr                                                             │ functional-763073 │ jenkins │ v1.37.0 │ 16 Dec 25 04:50 UTC │ 16 Dec 25 04:50 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:50:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:50:19.275087  498832 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:50:19.275230  498832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:50:19.275256  498832 out.go:374] Setting ErrFile to fd 2...
	I1216 04:50:19.275275  498832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:50:19.275561  498832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:50:19.275974  498832 out.go:368] Setting JSON to false
	I1216 04:50:19.276868  498832 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12766,"bootTime":1765847854,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:50:19.276969  498832 start.go:143] virtualization:  
	I1216 04:50:19.280401  498832 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:50:19.283297  498832 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:50:19.283468  498832 notify.go:221] Checking for updates...
	I1216 04:50:19.289162  498832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:50:19.292148  498832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:50:19.295134  498832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:50:19.297944  498832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:50:19.300988  498832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:50:19.304379  498832 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:50:19.305004  498832 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:50:19.352771  498832 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:50:19.352964  498832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:50:19.426222  498832 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:50:19.416940994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:50:19.426334  498832 docker.go:319] overlay module found
	I1216 04:50:19.429570  498832 out.go:179] * Using the docker driver based on existing profile
	I1216 04:50:19.432360  498832 start.go:309] selected driver: docker
	I1216 04:50:19.432375  498832 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:50:19.432474  498832 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:50:19.432581  498832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:50:19.497183  498832 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:50:19.487722858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:50:19.497651  498832 cni.go:84] Creating CNI manager for ""
	I1216 04:50:19.497714  498832 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:50:19.497750  498832 start.go:353] cluster config:
	{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:50:19.500776  498832 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.365457664Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=66aba16f-2286-4957-9589-3f6b308f0653 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366373784Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a0b09546-fe1b-440e-8076-598a1e2930d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.366892723Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=976ba277-fbb2-4db1-8ee0-ce87f329b2fa name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367464412Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=15d708f7-0c1f-4e61-bde7-afc75b1dc430 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.367935941Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2d28f296-8f48-4bb2-bf27-13281f9a3b27 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368429435Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=82541142-23b6-4f48-816e-5b740356cd35 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:44:12 functional-763073 crio[9970]: time="2025-12-16T04:44:12.368875848Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=29b0dee6-8ec8-4ecc-822d-bf19bcc0e034 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.971042577Z" level=info msg="Checking image status: kicbase/echo-server:functional-763073" id=70a7356a-5558-4459-a73a-e89eebe7e1b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.97127796Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.971332812Z" level=info msg="Image kicbase/echo-server:functional-763073 not found" id=70a7356a-5558-4459-a73a-e89eebe7e1b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.97140999Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-763073 found" id=70a7356a-5558-4459-a73a-e89eebe7e1b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.996458455Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-763073" id=976812f7-10f3-4e01-8ec1-eb125d206dbf name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.996617455Z" level=info msg="Image docker.io/kicbase/echo-server:functional-763073 not found" id=976812f7-10f3-4e01-8ec1-eb125d206dbf name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:22 functional-763073 crio[9970]: time="2025-12-16T04:50:22.996672791Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-763073 found" id=976812f7-10f3-4e01-8ec1-eb125d206dbf name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:23 functional-763073 crio[9970]: time="2025-12-16T04:50:23.022513879Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-763073" id=c79ed0cd-703d-4525-bbed-3275903a7793 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:23 functional-763073 crio[9970]: time="2025-12-16T04:50:23.022687666Z" level=info msg="Image localhost/kicbase/echo-server:functional-763073 not found" id=c79ed0cd-703d-4525-bbed-3275903a7793 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:23 functional-763073 crio[9970]: time="2025-12-16T04:50:23.02273391Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-763073 found" id=c79ed0cd-703d-4525-bbed-3275903a7793 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.050577763Z" level=info msg="Checking image status: kicbase/echo-server:functional-763073" id=bb3f5e13-09e3-439c-b578-1ad3b1c28a1d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.050780424Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.050838336Z" level=info msg="Image kicbase/echo-server:functional-763073 not found" id=bb3f5e13-09e3-439c-b578-1ad3b1c28a1d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.050927732Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-763073 found" id=bb3f5e13-09e3-439c-b578-1ad3b1c28a1d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.089986356Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-763073" id=e89fb5a5-1d43-4cc6-8ef4-b40b006e0259 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.090160963Z" level=info msg="Image docker.io/kicbase/echo-server:functional-763073 not found" id=e89fb5a5-1d43-4cc6-8ef4-b40b006e0259 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.090205345Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-763073 found" id=e89fb5a5-1d43-4cc6-8ef4-b40b006e0259 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 04:50:26 functional-763073 crio[9970]: time="2025-12-16T04:50:26.12007273Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-763073" id=cb604810-945f-4549-afc7-3a61db5db53a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:50:28.628025   23945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:28.629810   23945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:28.630807   23945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:28.632430   23945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:50:28.632859   23945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 01:17] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034430] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.741276] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.329373] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 03:00] hrtimer: interrupt took 10796797 ns
	[Dec16 04:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec16 04:11] overlayfs: idmapped layers are currently not supported
	[  +0.083578] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec16 04:17] overlayfs: idmapped layers are currently not supported
	[Dec16 04:18] overlayfs: idmapped layers are currently not supported
	[Dec16 04:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:50:28 up  3:32,  0 user,  load average: 1.46, 0.57, 0.55
	Linux functional-763073 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:50:26 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:26 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 16 04:50:26 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:26 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:26 functional-763073 kubelet[23782]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:26 functional-763073 kubelet[23782]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:26 functional-763073 kubelet[23782]: E1216 04:50:26.874872   23782 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:26 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:26 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:27 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 16 04:50:27 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:27 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:27 functional-763073 kubelet[23832]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:27 functional-763073 kubelet[23832]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:27 functional-763073 kubelet[23832]: E1216 04:50:27.584463   23832 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:27 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:27 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:50:28 functional-763073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 16 04:50:28 functional-763073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:28 functional-763073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:50:28 functional-763073 kubelet[23869]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:28 functional-763073 kubelet[23869]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 04:50:28 functional-763073 kubelet[23869]: E1216 04:50:28.384345   23869 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:50:28 functional-763073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:50:28 functional-763073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073 -n functional-763073: exit status 2 (337.678912ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-763073" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.43s)
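
Note: the refused connections to localhost:8441 throughout this section trace back to the kubelet crash loop shown in the log tail above ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver never comes back. A minimal diagnostic sketch for confirming the host's cgroup version (not part of the harness; assumes GNU stat and a mounted /sys/fs/cgroup):

	stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1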

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-763073 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-763073 tunnel --alsologtostderr]
E1216 04:48:22.217291  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1216 04:48:22.168175  494609 out.go:360] Setting OutFile to fd 1 ...
I1216 04:48:22.169387  494609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:48:22.169407  494609 out.go:374] Setting ErrFile to fd 2...
I1216 04:48:22.169416  494609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:48:22.169723  494609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:48:22.170060  494609 mustload.go:66] Loading cluster: functional-763073
I1216 04:48:22.170501  494609 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:48:22.171028  494609 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
I1216 04:48:22.197679  494609 host.go:66] Checking if "functional-763073" exists ...
I1216 04:48:22.198020  494609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 04:48:22.299997  494609 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:48:22.290160788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 04:48:22.300122  494609 api_server.go:166] Checking apiserver status ...
I1216 04:48:22.300180  494609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1216 04:48:22.300238  494609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
I1216 04:48:22.352163  494609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
W1216 04:48:22.514142  494609 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1216 04:48:22.517844  494609 out.go:179] * The control-plane node functional-763073 apiserver is not running: (state=Stopped)
I1216 04:48:22.521322  494609 out.go:179]   To start a cluster, run: "minikube start -p functional-763073"

stdout: * The control-plane node functional-763073 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-763073"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-763073 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 494608: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-763073 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-763073 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-763073 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-763073 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-763073 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)
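
Note: exit status 103 is the "apiserver is not running" exit surfaced in the stdout above, so the second tunnel never starts. A pre-flight sketch using the same status command the harness runs elsewhere in this report (profile name taken from the log):

	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-763073    # expect "Running"; this report shows "Stopped"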

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-763073 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-763073 apply -f testdata/testsvc.yaml: exit status 1 (77.107389ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-763073 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (99.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.103.214.36": Temporary Error: Get "http://10.103.214.36": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-763073 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-763073 get svc nginx-svc: exit status 1 (60.692761ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-763073 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (99.12s)
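
Note: a hand-run equivalent of the probe this test performs, using the ClusterIP from the failure message above (reachable from the host only while a tunnel is up; the timeout value is illustrative):

	curl -s --max-time 5 http://10.103.214.36 | grep 'Welcome to nginx!'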

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-763073 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-763073 create deployment hello-node --image kicbase/echo-server: exit status 1 (53.214376ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-763073 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 service list: exit status 103 (262.593258ms)

-- stdout --
	* The control-plane node functional-763073 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-763073"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-763073 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-763073 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-763073\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 service list -o json: exit status 103 (271.155427ms)

-- stdout --
	* The control-plane node functional-763073 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-763073"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-763073 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 service --namespace=default --https --url hello-node: exit status 103 (262.036338ms)

-- stdout --
	* The control-plane node functional-763073 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-763073"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-763073 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 service hello-node --url --format={{.IP}}: exit status 103 (258.331536ms)

-- stdout --
	* The control-plane node functional-763073 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-763073"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-763073 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-763073 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-763073\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 service hello-node --url: exit status 103 (253.412948ms)

-- stdout --
	* The control-plane node functional-763073 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-763073"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-763073 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-763073 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-763073"
functional_test.go:1579: failed to parse "* The control-plane node functional-763073 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-763073\"": parse "* The control-plane node functional-763073 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-763073\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765860609398103848" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765860609398103848" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765860609398103848" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001/test-1765860609398103848
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.284706ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:50:09.752677  441727 retry.go:31] will retry after 618.339179ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 04:50 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 04:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 04:50 test-1765860609398103848
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh cat /mount-9p/test-1765860609398103848
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-763073 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-763073 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (56.510122ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-763073 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (275.00021ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=40401)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 16 04:50 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 16 04:50 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 16 04:50 test-1765860609398103848
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-763073 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:40401
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001:/mount-9p --alsologtostderr -v=1] stderr:
I1216 04:50:09.448710  496886 out.go:360] Setting OutFile to fd 1 ...
I1216 04:50:09.456393  496886 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:09.456444  496886 out.go:374] Setting ErrFile to fd 2...
I1216 04:50:09.456463  496886 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:09.456865  496886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:50:09.457345  496886 mustload.go:66] Loading cluster: functional-763073
I1216 04:50:09.458051  496886 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:09.458782  496886 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
I1216 04:50:09.487085  496886 host.go:66] Checking if "functional-763073" exists ...
I1216 04:50:09.487412  496886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 04:50:09.591774  496886 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:50:09.579428573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 04:50:09.591923  496886 cli_runner.go:164] Run: docker network inspect functional-763073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 04:50:09.637157  496886 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001 into VM as /mount-9p ...
I1216 04:50:09.640320  496886 out.go:179]   - Mount type:   9p
I1216 04:50:09.643226  496886 out.go:179]   - User ID:      docker
I1216 04:50:09.646494  496886 out.go:179]   - Group ID:     docker
I1216 04:50:09.649562  496886 out.go:179]   - Version:      9p2000.L
I1216 04:50:09.653439  496886 out.go:179]   - Message Size: 262144
I1216 04:50:09.656279  496886 out.go:179]   - Options:      map[]
I1216 04:50:09.659082  496886 out.go:179]   - Bind Address: 192.168.49.1:40401
I1216 04:50:09.661849  496886 out.go:179] * Userspace file server: 
I1216 04:50:09.662168  496886 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1216 04:50:09.662270  496886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
I1216 04:50:09.686749  496886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
I1216 04:50:09.783968  496886 mount.go:180] unmount for /mount-9p ran successfully
I1216 04:50:09.784004  496886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1216 04:50:09.792425  496886 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40401,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1216 04:50:09.802982  496886 main.go:127] stdlog: ufs.go:141 connected
I1216 04:50:09.803144  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tversion tag 65535 msize 262144 version '9P2000.L'
I1216 04:50:09.803196  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rversion tag 65535 msize 262144 version '9P2000'
I1216 04:50:09.803408  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1216 04:50:09.803466  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rattach tag 0 aqid (ed6f04 257e8d71 'd')
I1216 04:50:09.804093  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 0
I1216 04:50:09.804147  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f04 257e8d71 'd') m d775 at 0 mt 1765860609 l 4096 t 0 d 0 ext )
I1216 04:50:09.808074  496886 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/.mount-process: {Name:mkc460ddbedff4808982b2d1205bc79387a3e349 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:50:09.808265  496886 mount.go:105] mount successful: ""
I1216 04:50:09.811776  496886 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3152887719/001 to /mount-9p
I1216 04:50:09.814656  496886 out.go:203] 
I1216 04:50:09.817510  496886 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1216 04:50:10.901878  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 0
I1216 04:50:10.901962  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f04 257e8d71 'd') m d775 at 0 mt 1765860609 l 4096 t 0 d 0 ext )
I1216 04:50:10.902393  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 0 newfid 1 
I1216 04:50:10.902451  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rwalk tag 0 
I1216 04:50:10.902641  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Topen tag 0 fid 1 mode 0
I1216 04:50:10.902696  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Ropen tag 0 qid (ed6f04 257e8d71 'd') iounit 0
I1216 04:50:10.902838  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 0
I1216 04:50:10.902876  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f04 257e8d71 'd') m d775 at 0 mt 1765860609 l 4096 t 0 d 0 ext )
I1216 04:50:10.903038  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 1 offset 0 count 262120
I1216 04:50:10.903162  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 258
I1216 04:50:10.903295  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 1 offset 258 count 261862
I1216 04:50:10.903329  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 0
I1216 04:50:10.903461  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 1 offset 258 count 262120
I1216 04:50:10.903495  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 0
I1216 04:50:10.903622  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1216 04:50:10.903658  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rwalk tag 0 (ed6f05 257e8d71 '') 
I1216 04:50:10.903784  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:10.903828  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f05 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:10.903986  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:10.904016  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f05 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:10.904157  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 2
I1216 04:50:10.904186  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:10.904313  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 0 newfid 2 0:'test-1765860609398103848' 
I1216 04:50:10.904351  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rwalk tag 0 (ed6f07 257e8d71 '') 
I1216 04:50:10.904479  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:10.904528  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('test-1765860609398103848' 'jenkins' 'jenkins' '' q (ed6f07 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:10.904649  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:10.904680  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('test-1765860609398103848' 'jenkins' 'jenkins' '' q (ed6f07 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:10.904806  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 2
I1216 04:50:10.904826  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:10.904951  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1216 04:50:10.904991  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rwalk tag 0 (ed6f06 257e8d71 '') 
I1216 04:50:10.905140  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:10.905176  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f06 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:10.905343  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:10.905403  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f06 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:10.905539  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 2
I1216 04:50:10.905566  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:10.905688  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 1 offset 258 count 262120
I1216 04:50:10.905715  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 0
I1216 04:50:10.905861  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 1
I1216 04:50:10.905890  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:11.197274  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 0 newfid 1 0:'test-1765860609398103848' 
I1216 04:50:11.197345  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rwalk tag 0 (ed6f07 257e8d71 '') 
I1216 04:50:11.197535  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 1
I1216 04:50:11.197578  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('test-1765860609398103848' 'jenkins' 'jenkins' '' q (ed6f07 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:11.197717  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 1 newfid 2 
I1216 04:50:11.197750  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rwalk tag 0 
I1216 04:50:11.197882  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Topen tag 0 fid 2 mode 0
I1216 04:50:11.197988  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Ropen tag 0 qid (ed6f07 257e8d71 '') iounit 0
I1216 04:50:11.198128  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 1
I1216 04:50:11.198169  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('test-1765860609398103848' 'jenkins' 'jenkins' '' q (ed6f07 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:11.198321  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 2 offset 0 count 262120
I1216 04:50:11.198396  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 24
I1216 04:50:11.198524  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 2 offset 24 count 262120
I1216 04:50:11.198555  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 0
I1216 04:50:11.198707  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 2 offset 24 count 262120
I1216 04:50:11.198760  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 0
I1216 04:50:11.198961  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 2
I1216 04:50:11.199011  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:11.199195  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 1
I1216 04:50:11.199223  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:11.532303  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 0
I1216 04:50:11.532417  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f04 257e8d71 'd') m d775 at 0 mt 1765860609 l 4096 t 0 d 0 ext )
I1216 04:50:11.532760  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 0 newfid 1 
I1216 04:50:11.532804  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rwalk tag 0 
I1216 04:50:11.532971  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Topen tag 0 fid 1 mode 0
I1216 04:50:11.533024  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Ropen tag 0 qid (ed6f04 257e8d71 'd') iounit 0
I1216 04:50:11.533177  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 0
I1216 04:50:11.533212  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f04 257e8d71 'd') m d775 at 0 mt 1765860609 l 4096 t 0 d 0 ext )
I1216 04:50:11.533373  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 1 offset 0 count 262120
I1216 04:50:11.533465  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 258
I1216 04:50:11.533608  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 1 offset 258 count 261862
I1216 04:50:11.533632  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 0
I1216 04:50:11.533866  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 1 offset 258 count 262120
I1216 04:50:11.533891  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 0
I1216 04:50:11.534026  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1216 04:50:11.534059  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rwalk tag 0 (ed6f05 257e8d71 '') 
I1216 04:50:11.534193  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:11.534227  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f05 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:11.534344  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:11.534379  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f05 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:11.534536  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 2
I1216 04:50:11.534561  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:11.534683  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 0 newfid 2 0:'test-1765860609398103848' 
I1216 04:50:11.534714  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rwalk tag 0 (ed6f07 257e8d71 '') 
I1216 04:50:11.534837  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:11.534871  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('test-1765860609398103848' 'jenkins' 'jenkins' '' q (ed6f07 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:11.534986  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:11.535021  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('test-1765860609398103848' 'jenkins' 'jenkins' '' q (ed6f07 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:11.535143  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 2
I1216 04:50:11.535166  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:11.535296  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1216 04:50:11.535330  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rwalk tag 0 (ed6f06 257e8d71 '') 
I1216 04:50:11.535467  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:11.535540  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f06 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:11.535676  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tstat tag 0 fid 2
I1216 04:50:11.535727  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f06 257e8d71 '') m 644 at 0 mt 1765860609 l 24 t 0 d 0 ext )
I1216 04:50:11.535862  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 2
I1216 04:50:11.535886  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:11.536012  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tread tag 0 fid 1 offset 258 count 262120
I1216 04:50:11.536040  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rread tag 0 count 0
I1216 04:50:11.536291  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 1
I1216 04:50:11.536325  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:11.537701  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1216 04:50:11.537775  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rerror tag 0 ename 'file not found' ecode 0
I1216 04:50:11.797122  496886 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54988 Tclunk tag 0 fid 0
I1216 04:50:11.797184  496886 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54988 Rclunk tag 0
I1216 04:50:11.801712  496886 main.go:127] stdlog: ufs.go:147 disconnected
I1216 04:50:11.835443  496886 out.go:179] * Unmounting /mount-9p ...
I1216 04:50:11.838375  496886 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1216 04:50:11.847477  496886 mount.go:180] unmount for /mount-9p ran successfully
I1216 04:50:11.847586  496886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/.mount-process: {Name:mkc460ddbedff4808982b2d1205bc79387a3e349 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 04:50:11.850810  496886 out.go:203] 
W1216 04:50:11.853802  496886 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1216 04:50:11.856736  496886 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.53s)
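
Note: the 9p mount itself succeeded (findmnt lists it and the three test files are visible); the FAIL is again the stopped apiserver, hit when "kubectl replace" runs. The checks below simply mirror the ones the harness already ran above and are a reasonable manual starting point:

	out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-763073 ssh -- ls -la /mount-9p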

TestJSONOutput/pause/Command (1.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-212065 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-212065 --output=json --user=testUser: exit status 80 (1.742453409s)

-- stdout --
	{"specversion":"1.0","id":"b1d138f1-1829-44fc-864c-e335b67fcf92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-212065 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"b5544f2b-6d94-465a-aacb-b84dd9febd08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-16T05:06:46Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"004346eb-9fae-4a00-8016-21e9612350de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-212065 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.74s)
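
Note: pause fails because "sudo runc list -f json" cannot open /run/runc inside the node. A hedged alternative for inspecting container state on a crio node (assumes crictl is present in the node image):

	out/minikube-linux-arm64 -p json-output-212065 ssh -- sudo crictl ps -a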

TestJSONOutput/unpause/Command (1.88s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-212065 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-212065 --output=json --user=testUser: exit status 80 (1.878379976s)

-- stdout --
	{"specversion":"1.0","id":"6e955b2a-95d9-4c4c-8288-9b8876ca506d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-212065 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e32da072-9fc1-4658-8bab-81a3a2831a7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-16T05:06:48Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"1b6a13ef-ddcd-461d-9a8a-d8dab288dc06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-212065 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.88s)
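
Both the pause and unpause failures bottom out in the same probe: `sudo runc list -f json` inside the node exits 1 with `open /run/runc: no such file or directory` (/run/runc is runc's default state directory when running as root). A sketch that re-runs the exact probe shown in the logs above, assuming runc and sudo are available where it runs:

    // Re-run the container-listing probe that fails in both tests above.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command the logs show minikube invoking over SSH.
        cmd := exec.Command("sudo", "runc", "list", "-f", "json")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            fmt.Printf("runc list failed: %v\nstderr: %s", err, stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }

On the failing node this reproduces the `exit status 1` plus the `/run/runc` stderr line quoted in both GUEST_PAUSE and GUEST_UNPAUSE events.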

TestKubernetesUpgrade (785.81s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-913873 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 05:25:24.307883  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-913873 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.709475208s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-913873
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-913873: (1.330870074s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-913873 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-913873 status --format={{.Host}}: exit status 7 (76.632191ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-913873 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 05:26:25.717625  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-913873 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m20.005169748s)

-- stdout --
	* [kubernetes-upgrade-913873] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-913873" primary control-plane node in "kubernetes-upgrade-913873" cluster
	* Pulling base image v0.0.48-1765575274-22117 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1216 05:25:48.072402  620659 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:25:48.072634  620659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:25:48.072664  620659 out.go:374] Setting ErrFile to fd 2...
	I1216 05:25:48.072683  620659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:25:48.073151  620659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:25:48.073708  620659 out.go:368] Setting JSON to false
	I1216 05:25:48.074693  620659 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14894,"bootTime":1765847854,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 05:25:48.074838  620659 start.go:143] virtualization:  
	I1216 05:25:48.078658  620659 out.go:179] * [kubernetes-upgrade-913873] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 05:25:48.082464  620659 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 05:25:48.082664  620659 notify.go:221] Checking for updates...
	I1216 05:25:48.086506  620659 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:25:48.089406  620659 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 05:25:48.092372  620659 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 05:25:48.095353  620659 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 05:25:48.098239  620659 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:25:48.101623  620659 config.go:182] Loaded profile config "kubernetes-upgrade-913873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1216 05:25:48.102253  620659 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:25:48.125397  620659 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 05:25:48.125533  620659 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:25:48.186521  620659 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 05:25:48.176151115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 05:25:48.186636  620659 docker.go:319] overlay module found
	I1216 05:25:48.189799  620659 out.go:179] * Using the docker driver based on existing profile
	I1216 05:25:48.192793  620659 start.go:309] selected driver: docker
	I1216 05:25:48.192824  620659 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-913873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-913873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:25:48.192939  620659 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:25:48.193752  620659 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:25:48.245798  620659 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 05:25:48.237298298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 05:25:48.246124  620659 cni.go:84] Creating CNI manager for ""
	I1216 05:25:48.246188  620659 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:25:48.246240  620659 start.go:353] cluster config:
	{Name:kubernetes-upgrade-913873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-913873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:25:48.249380  620659 out.go:179] * Starting "kubernetes-upgrade-913873" primary control-plane node in "kubernetes-upgrade-913873" cluster
	I1216 05:25:48.252170  620659 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:25:48.255072  620659 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 05:25:48.258093  620659 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:25:48.258145  620659 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1216 05:25:48.258157  620659 cache.go:65] Caching tarball of preloaded images
	I1216 05:25:48.258170  620659 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 05:25:48.258267  620659 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 05:25:48.258277  620659 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 05:25:48.258378  620659 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/config.json ...
	I1216 05:25:48.278294  620659 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 05:25:48.278314  620659 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 05:25:48.278337  620659 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:25:48.278369  620659 start.go:360] acquireMachinesLock for kubernetes-upgrade-913873: {Name:mk577ef0be189d57aabe252e7de59a19e4c67836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:25:48.278430  620659 start.go:364] duration metric: took 44.85µs to acquireMachinesLock for "kubernetes-upgrade-913873"
	I1216 05:25:48.278451  620659 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:25:48.278457  620659 fix.go:54] fixHost starting: 
	I1216 05:25:48.278706  620659 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-913873 --format={{.State.Status}}
	I1216 05:25:48.295653  620659 fix.go:112] recreateIfNeeded on kubernetes-upgrade-913873: state=Stopped err=<nil>
	W1216 05:25:48.295681  620659 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:25:48.298872  620659 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-913873" ...
	I1216 05:25:48.298956  620659 cli_runner.go:164] Run: docker start kubernetes-upgrade-913873
	I1216 05:25:48.552391  620659 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-913873 --format={{.State.Status}}
	I1216 05:25:48.575778  620659 kic.go:430] container "kubernetes-upgrade-913873" state is running.
	I1216 05:25:48.576166  620659 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-913873
	I1216 05:25:48.597176  620659 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/config.json ...
	I1216 05:25:48.597403  620659 machine.go:94] provisionDockerMachine start ...
	I1216 05:25:48.597468  620659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-913873
	I1216 05:25:48.616374  620659 main.go:143] libmachine: Using SSH client type: native
	I1216 05:25:48.616699  620659 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1216 05:25:48.616714  620659 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:25:48.617416  620659 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 05:25:51.756614  620659 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-913873
	
	I1216 05:25:51.756691  620659 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-913873"
	I1216 05:25:51.756820  620659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-913873
	I1216 05:25:51.778724  620659 main.go:143] libmachine: Using SSH client type: native
	I1216 05:25:51.779108  620659 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1216 05:25:51.779126  620659 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-913873 && echo "kubernetes-upgrade-913873" | sudo tee /etc/hostname
	I1216 05:25:51.923127  620659 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-913873
	
	I1216 05:25:51.923267  620659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-913873
	I1216 05:25:51.942087  620659 main.go:143] libmachine: Using SSH client type: native
	I1216 05:25:51.942414  620659 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1216 05:25:51.942437  620659 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-913873' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-913873/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-913873' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:25:52.081624  620659 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:25:52.081650  620659 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-438353/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-438353/.minikube}
	I1216 05:25:52.081672  620659 ubuntu.go:190] setting up certificates
	I1216 05:25:52.081693  620659 provision.go:84] configureAuth start
	I1216 05:25:52.081761  620659 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-913873
	I1216 05:25:52.100090  620659 provision.go:143] copyHostCerts
	I1216 05:25:52.100172  620659 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem, removing ...
	I1216 05:25:52.100187  620659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem
	I1216 05:25:52.100263  620659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/ca.pem (1078 bytes)
	I1216 05:25:52.100380  620659 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem, removing ...
	I1216 05:25:52.100392  620659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem
	I1216 05:25:52.100421  620659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/cert.pem (1123 bytes)
	I1216 05:25:52.100488  620659 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem, removing ...
	I1216 05:25:52.100498  620659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem
	I1216 05:25:52.100524  620659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-438353/.minikube/key.pem (1679 bytes)
	I1216 05:25:52.100590  620659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-913873 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-913873 localhost minikube]
	I1216 05:25:52.284811  620659 provision.go:177] copyRemoteCerts
	I1216 05:25:52.284880  620659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:25:52.284931  620659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-913873
	I1216 05:25:52.305318  620659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/kubernetes-upgrade-913873/id_rsa Username:docker}
	I1216 05:25:52.404550  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 05:25:52.421848  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1216 05:25:52.440008  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:25:52.458217  620659 provision.go:87] duration metric: took 376.495738ms to configureAuth
	I1216 05:25:52.458244  620659 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:25:52.458436  620659 config.go:182] Loaded profile config "kubernetes-upgrade-913873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:25:52.458566  620659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-913873
	I1216 05:25:52.476700  620659 main.go:143] libmachine: Using SSH client type: native
	I1216 05:25:52.477021  620659 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1216 05:25:52.477048  620659 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:25:52.811685  620659 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:25:52.811710  620659 machine.go:97] duration metric: took 4.214293901s to provisionDockerMachine
	I1216 05:25:52.811722  620659 start.go:293] postStartSetup for "kubernetes-upgrade-913873" (driver="docker")
	I1216 05:25:52.811734  620659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:25:52.811794  620659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:25:52.811856  620659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-913873
	I1216 05:25:52.829345  620659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/kubernetes-upgrade-913873/id_rsa Username:docker}
	I1216 05:25:52.924892  620659 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:25:52.928252  620659 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:25:52.928282  620659 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:25:52.928293  620659 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 05:25:52.928349  620659 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 05:25:52.928464  620659 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 05:25:52.928626  620659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:25:52.936055  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 05:25:52.953450  620659 start.go:296] duration metric: took 141.712207ms for postStartSetup
	I1216 05:25:52.953623  620659 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:25:52.953671  620659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-913873
	I1216 05:25:52.970892  620659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/kubernetes-upgrade-913873/id_rsa Username:docker}
	I1216 05:25:53.066217  620659 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:25:53.070926  620659 fix.go:56] duration metric: took 4.792462432s for fixHost
	I1216 05:25:53.070953  620659 start.go:83] releasing machines lock for "kubernetes-upgrade-913873", held for 4.792513666s
	I1216 05:25:53.071024  620659 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-913873
	I1216 05:25:53.087952  620659 ssh_runner.go:195] Run: cat /version.json
	I1216 05:25:53.088050  620659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-913873
	I1216 05:25:53.088333  620659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:25:53.088391  620659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-913873
	I1216 05:25:53.107659  620659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/kubernetes-upgrade-913873/id_rsa Username:docker}
	I1216 05:25:53.110805  620659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/kubernetes-upgrade-913873/id_rsa Username:docker}
	I1216 05:25:53.201398  620659 ssh_runner.go:195] Run: systemctl --version
	I1216 05:25:53.291773  620659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:25:53.329108  620659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:25:53.333603  620659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:25:53.333706  620659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:25:53.341628  620659 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:25:53.341651  620659 start.go:496] detecting cgroup driver to use...
	I1216 05:25:53.341701  620659 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:25:53.341763  620659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:25:53.357431  620659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:25:53.370923  620659 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:25:53.370986  620659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:25:53.387193  620659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:25:53.401042  620659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:25:53.515427  620659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:25:53.633037  620659 docker.go:234] disabling docker service ...
	I1216 05:25:53.633165  620659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:25:53.648275  620659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:25:53.661543  620659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:25:53.778276  620659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:25:53.896040  620659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:25:53.909392  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:25:53.924438  620659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:25:53.924523  620659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:25:53.934261  620659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 05:25:53.934334  620659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:25:53.943665  620659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:25:53.952983  620659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:25:53.962642  620659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:25:53.971457  620659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:25:53.980787  620659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:25:53.989668  620659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:25:53.998954  620659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:25:54.008487  620659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:25:54.019333  620659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:25:54.137051  620659 ssh_runner.go:195] Run: sudo systemctl restart crio
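
Taken together, the sed runs at 05:25:53.9 above amount to the following settings in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted (an approximate reconstruction from the commands shown, not a dump of the actual file):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]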
	I1216 05:25:54.319502  620659 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:25:54.319597  620659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:25:54.323513  620659 start.go:564] Will wait 60s for crictl version
	I1216 05:25:54.323599  620659 ssh_runner.go:195] Run: which crictl
	I1216 05:25:54.327377  620659 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:25:54.352378  620659 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:25:54.352497  620659 ssh_runner.go:195] Run: crio --version
	I1216 05:25:54.385489  620659 ssh_runner.go:195] Run: crio --version
	I1216 05:25:54.418451  620659 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1216 05:25:54.421280  620659 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-913873 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:25:54.437714  620659 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 05:25:54.441547  620659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:25:54.451636  620659 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-913873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-913873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:25:54.451753  620659 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 05:25:54.451814  620659 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:25:54.485560  620659 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1216 05:25:54.485649  620659 ssh_runner.go:195] Run: which lz4
	I1216 05:25:54.489398  620659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 05:25:54.493006  620659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 05:25:54.493051  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1216 05:25:56.132669  620659 crio.go:462] duration metric: took 1.643321506s to copy over tarball
	I1216 05:25:56.132819  620659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 05:25:58.030773  620659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.897892217s)
	I1216 05:25:58.030803  620659 crio.go:469] duration metric: took 1.898066077s to extract the tarball
	I1216 05:25:58.030810  620659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 05:25:58.104186  620659 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:25:58.140728  620659 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:25:58.140752  620659 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:25:58.140761  620659 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1216 05:25:58.140862  620659 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-913873 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-913873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:25:58.140947  620659 ssh_runner.go:195] Run: crio config
	I1216 05:25:58.205632  620659 cni.go:84] Creating CNI manager for ""
	I1216 05:25:58.205700  620659 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:25:58.205732  620659 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:25:58.205758  620659 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-913873 NodeName:kubernetes-upgrade-913873 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:25:58.205897  620659 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-913873"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
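
The generated kubeadm config above is a single file containing four YAML documents (InitConfiguration and ClusterConfiguration under kubeadm.k8s.io/v1beta4, plus KubeletConfiguration and KubeProxyConfiguration), written a few lines below as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch that splits such a multi-document file and reports each document's identity, assuming gopkg.in/yaml.v3 (minikube itself renders this file from templates rather than parsing it this way):

    // Print the apiVersion/kind of every document in a multi-doc YAML stream.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    type header struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        dec := yaml.NewDecoder(os.Stdin)
        for {
            var h header
            err := dec.Decode(&h) // yaml.v3 returns io.EOF after the last document
            if err == io.EOF {
                return
            }
            if err != nil {
                fmt.Fprintln(os.Stderr, "decode:", err)
                os.Exit(1)
            }
            fmt.Println(h.APIVersion, h.Kind)
        }
    }

Run against the file above it would print the four apiVersion/kind pairs, one per document, in the order they appear.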
	I1216 05:25:58.205980  620659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:25:58.213513  620659 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:25:58.213603  620659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:25:58.222022  620659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1216 05:25:58.234616  620659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:25:58.247240  620659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1216 05:25:58.259673  620659 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:25:58.264896  620659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:25:58.274741  620659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:25:58.386712  620659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:25:58.402244  620659 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873 for IP: 192.168.76.2
	I1216 05:25:58.402276  620659 certs.go:195] generating shared ca certs ...
	I1216 05:25:58.402292  620659 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:25:58.402482  620659 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 05:25:58.402571  620659 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 05:25:58.402586  620659 certs.go:257] generating profile certs ...
	I1216 05:25:58.402720  620659 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/client.key
	I1216 05:25:58.402818  620659 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/apiserver.key.33089e04
	I1216 05:25:58.402909  620659 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/proxy-client.key
	I1216 05:25:58.403055  620659 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 05:25:58.403118  620659 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 05:25:58.403132  620659 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:25:58.403176  620659 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 05:25:58.403228  620659 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:25:58.403275  620659 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 05:25:58.403356  620659 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 05:25:58.404043  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:25:58.422147  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:25:58.439795  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:25:58.458687  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 05:25:58.477911  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1216 05:25:58.495460  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:25:58.512971  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:25:58.531085  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:25:58.549235  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:25:58.566963  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 05:25:58.584460  620659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 05:25:58.602329  620659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:25:58.614948  620659 ssh_runner.go:195] Run: openssl version
	I1216 05:25:58.621437  620659 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:25:58.629115  620659 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:25:58.636740  620659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:25:58.640378  620659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:25:58.640472  620659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:25:58.681458  620659 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:25:58.688792  620659 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 05:25:58.696362  620659 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 05:25:58.703963  620659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 05:25:58.707602  620659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 05:25:58.707690  620659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 05:25:58.748771  620659 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:25:58.756262  620659 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 05:25:58.763575  620659 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 05:25:58.771253  620659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 05:25:58.775238  620659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 05:25:58.775307  620659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 05:25:58.816363  620659 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:25:58.823842  620659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:25:58.828571  620659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:25:58.869994  620659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:25:58.910961  620659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:25:58.960314  620659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:25:59.010677  620659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:25:59.059182  620659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
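
Each of the six openssl runs above uses `-checkend 86400`, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); all six pass here, since the run continues. The same check in Go, as a sketch using crypto/x509:

    // Equivalent of `openssl x509 -noout -in CERT -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile(os.Args[1]) // path to a PEM certificate
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(2)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Printf("certificate expires within 24h (NotAfter %s)\n", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }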
	I1216 05:25:59.108997  620659 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-913873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-913873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:25:59.109108  620659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:25:59.109176  620659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:25:59.136301  620659 cri.go:89] found id: ""
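The empty "found id" result comes from crictl's --quiet mode, which prints one container ID per line, filtered here by the pod-namespace label. A small Go sketch of the same listing (assumes crictl is on PATH):

// cri_list.go - list kube-system container IDs the way the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(strings.TrimSpace(string(out))) // empty slice => no containers yet
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}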
	I1216 05:25:59.136373  620659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:25:59.144150  620659 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:25:59.144170  620659 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:25:59.144239  620659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:25:59.151510  620659 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:25:59.152110  620659 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-913873" does not appear in /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 05:25:59.152366  620659 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-438353/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-913873" cluster setting kubeconfig missing "kubernetes-upgrade-913873" context setting]
	I1216 05:25:59.152815  620659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
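The repair step adds the missing cluster and context entries and rewrites the kubeconfig under a file lock. A sketch using client-go's clientcmd package (not minikube's own kubeconfig helpers; names and paths come from the log):

// kubeconfig_repair.go - ensure a cluster/context pair exists in a kubeconfig.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func ensureEntry(path, name, server, caPath string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caPath}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = ensureEntry(
		"/home/jenkins/minikube-integration/22158-438353/kubeconfig",
		"kubernetes-upgrade-913873",
		"https://192.168.76.2:8443",
		"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt",
	)
}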
	I1216 05:25:59.153539  620659 kapi.go:59] client config for kubernetes-upgrade-913873: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/kubernetes-upgrade-913873/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:25:59.154085  620659 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 05:25:59.154107  620659 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 05:25:59.154113  620659 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 05:25:59.154117  620659 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 05:25:59.154121  620659 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 05:25:59.154408  620659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:25:59.163959  620659 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 05:25:22.763364999 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 05:25:58.255745980 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-913873"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
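The diff above is the v1beta3-to-v1beta4 kubeadm API migration: extraArgs and kubeletExtraArgs change from string maps to lists of name/value pairs, and kubernetesVersion is bumped for the upgrade. Drift is detected simply by diffing the rendered config against what is on disk; a hedged Go sketch of that check (diff exits 1 when files differ, 2 on error):

// drift_check.go - treat a non-empty unified diff as kubeadm config drift.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit 2: missing file or other trouble
}

func main() {
	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Printf("drift=%v err=%v\n%s", drift, err, diff)
}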
	I1216 05:25:59.163982  620659 kubeadm.go:1161] stopping kube-system containers ...
	I1216 05:25:59.163994  620659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 05:25:59.164055  620659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:25:59.192276  620659 cri.go:89] found id: ""
	I1216 05:25:59.192345  620659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 05:25:59.208614  620659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:25:59.216456  620659 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 16 05:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 16 05:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 16 05:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 16 05:25 /etc/kubernetes/scheduler.conf
	
	I1216 05:25:59.216525  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:25:59.224482  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:25:59.232208  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:25:59.239784  620659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:25:59.239875  620659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:25:59.247431  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:25:59.254943  620659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:25:59.255008  620659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
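Each /etc/kubernetes/*.conf is grepped for the control-plane endpoint; files that do not reference it (grep exits 1) are removed so the following kubeadm phase regenerates them. A sketch of that pass (endpoint and paths from the log):

// conf_repair.go - drop kubeconfig files that lack the expected endpoint.
package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil {
			continue // an absent file gets regenerated anyway
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Println("removing", conf, "- endpoint not found")
			os.Remove(conf)
		}
	}
}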
	I1216 05:25:59.262346  620659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:25:59.270549  620659 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:25:59.316928  620659 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:26:00.871254  620659 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.554286216s)
	I1216 05:26:00.871331  620659 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:26:01.070359  620659 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:26:01.128904  620659 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
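Rather than a full kubeadm init, the restart path replays individual init phases against the new config: certs, kubeconfig, kubelet-start, control-plane, etcd. A minimal sketch of that sequence (hypothetical driver, same phase order as the log):

// phases.go - replay the kubeadm init phases shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm init phase %s: err=%v\n%s", p, err, out)
		if err != nil {
			break // later phases depend on earlier ones
		}
	}
}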
	I1216 05:26:01.176936  620659 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:26:01.177017  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:01.677174  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:02.177231  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:02.677274  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:03.177904  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:03.677594  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:04.177995  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:04.677852  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:05.178040  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:05.677203  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:06.177612  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:06.677205  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:07.177924  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:07.677156  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:08.177954  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:08.677966  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:09.177369  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:09.677690  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:10.177192  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:10.677212  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:11.177902  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:11.677215  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:12.177911  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:12.677082  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:13.177536  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:13.677665  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:14.177209  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:14.678087  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:15.177923  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:15.678021  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:16.177183  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:16.677189  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:17.178068  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:17.677204  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:18.178097  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:18.677172  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:19.177247  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:19.678074  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:20.178035  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:20.678150  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:21.177561  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:21.677174  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:22.177128  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:22.677187  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:23.177501  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:23.677188  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:24.177224  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:24.677598  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:25.177181  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:25.677230  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:26.177798  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:26.677752  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:27.178067  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:27.677278  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:28.177923  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:28.677242  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:29.177234  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:29.677939  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:30.177712  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:30.677734  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:31.178045  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:31.677177  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:32.177962  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:32.677160  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:33.178018  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:33.677271  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:34.177971  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:34.677919  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:35.177188  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:35.677095  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:36.177354  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:36.677801  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:37.177257  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:37.677659  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:38.178009  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:38.677505  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:39.177314  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:39.677237  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:40.177195  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:40.677908  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:41.177233  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:41.677889  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:42.177282  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:42.677666  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:43.178001  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:43.677732  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:44.177264  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:44.677218  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:45.178286  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:45.677801  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:46.177193  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:46.677771  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:47.177203  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:47.677699  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:48.178078  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:48.677147  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:49.177417  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:49.677742  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:50.177704  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:50.677142  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:51.177795  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:51.678065  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:52.178043  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:52.677404  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:53.178001  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:53.677931  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:54.177314  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:54.677398  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:55.177915  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:55.677774  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:56.177203  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:56.677177  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:57.177588  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:57.678090  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:58.177496  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:58.678062  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:59.177268  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:59.678014  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:00.178127  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:00.677857  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
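The minute of repeated pgrep lines above is the apiserver wait loop: every 500ms minikube checks whether a kube-apiserver process matching the profile exists; here it never appears, so the run falls back to gathering diagnostics. A condensed Go equivalent of the poll:

// wait_apiserver.go - poll for the kube-apiserver process, as the log does.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(time.Minute) // the log shows roughly a minute of polling per round
	for time.Now().Before(deadline) {
		// -x exact match, -n newest, -f match against the full command line.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out; fall back to log gathering")
}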
	I1216 05:27:01.177234  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:01.177378  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:01.220274  620659 cri.go:89] found id: ""
	I1216 05:27:01.220320  620659 logs.go:282] 0 containers: []
	W1216 05:27:01.220329  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:01.220336  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:01.220441  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:01.346010  620659 cri.go:89] found id: ""
	I1216 05:27:01.346039  620659 logs.go:282] 0 containers: []
	W1216 05:27:01.346048  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:01.346055  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:01.346113  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:01.421765  620659 cri.go:89] found id: ""
	I1216 05:27:01.421788  620659 logs.go:282] 0 containers: []
	W1216 05:27:01.421797  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:01.421803  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:01.421863  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:01.508215  620659 cri.go:89] found id: ""
	I1216 05:27:01.508237  620659 logs.go:282] 0 containers: []
	W1216 05:27:01.508246  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:01.508252  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:01.508309  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:01.548322  620659 cri.go:89] found id: ""
	I1216 05:27:01.548351  620659 logs.go:282] 0 containers: []
	W1216 05:27:01.548360  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:01.548367  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:01.548427  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:01.599579  620659 cri.go:89] found id: ""
	I1216 05:27:01.599607  620659 logs.go:282] 0 containers: []
	W1216 05:27:01.599616  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:01.599622  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:01.599694  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:01.668724  620659 cri.go:89] found id: ""
	I1216 05:27:01.668753  620659 logs.go:282] 0 containers: []
	W1216 05:27:01.668763  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:01.668769  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:01.668832  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:01.704032  620659 cri.go:89] found id: ""
	I1216 05:27:01.704054  620659 logs.go:282] 0 containers: []
	W1216 05:27:01.704062  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:01.704071  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:01.704082  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:01.815366  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:01.815401  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:01.843180  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:01.843209  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:02.289823  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:02.289870  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:02.289887  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:02.359393  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:02.359477  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
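The container-status probe is deliberately tolerant: it prefers crictl but falls back to docker ps -a when crictl is absent or fails, which is what the "which crictl || echo crictl" and "||" shell chain above expresses. The same fallback in Go:

// container_status.go - prefer crictl, fall back to docker.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		// crictl missing or failed: same fallback the shell || expresses.
		out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
	}
	fmt.Printf("err=%v\n%s", err, out)
}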
	I1216 05:27:04.925513  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:04.950822  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:04.950897  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:05.008824  620659 cri.go:89] found id: ""
	I1216 05:27:05.008852  620659 logs.go:282] 0 containers: []
	W1216 05:27:05.008862  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:05.008869  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:05.008943  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:05.052298  620659 cri.go:89] found id: ""
	I1216 05:27:05.052320  620659 logs.go:282] 0 containers: []
	W1216 05:27:05.052329  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:05.052335  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:05.052391  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:05.083081  620659 cri.go:89] found id: ""
	I1216 05:27:05.083102  620659 logs.go:282] 0 containers: []
	W1216 05:27:05.083112  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:05.083118  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:05.083178  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:05.113895  620659 cri.go:89] found id: ""
	I1216 05:27:05.113919  620659 logs.go:282] 0 containers: []
	W1216 05:27:05.113928  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:05.113933  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:05.114006  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:05.149727  620659 cri.go:89] found id: ""
	I1216 05:27:05.149811  620659 logs.go:282] 0 containers: []
	W1216 05:27:05.149835  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:05.149854  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:05.149961  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:05.183842  620659 cri.go:89] found id: ""
	I1216 05:27:05.183864  620659 logs.go:282] 0 containers: []
	W1216 05:27:05.183873  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:05.183879  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:05.183946  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:05.221649  620659 cri.go:89] found id: ""
	I1216 05:27:05.221673  620659 logs.go:282] 0 containers: []
	W1216 05:27:05.221682  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:05.221689  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:05.221748  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:05.280048  620659 cri.go:89] found id: ""
	I1216 05:27:05.280083  620659 logs.go:282] 0 containers: []
	W1216 05:27:05.280092  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:05.280101  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:05.280114  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:05.297430  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:05.297464  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:05.472681  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:05.472754  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:05.472782  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:05.515813  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:05.515901  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:05.566748  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:05.566778  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:08.153437  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:08.165423  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:08.165495  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:08.201317  620659 cri.go:89] found id: ""
	I1216 05:27:08.201344  620659 logs.go:282] 0 containers: []
	W1216 05:27:08.201353  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:08.201360  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:08.201420  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:08.234890  620659 cri.go:89] found id: ""
	I1216 05:27:08.234918  620659 logs.go:282] 0 containers: []
	W1216 05:27:08.234927  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:08.234933  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:08.234998  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:08.261481  620659 cri.go:89] found id: ""
	I1216 05:27:08.261507  620659 logs.go:282] 0 containers: []
	W1216 05:27:08.261518  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:08.261524  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:08.261593  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:08.300805  620659 cri.go:89] found id: ""
	I1216 05:27:08.300830  620659 logs.go:282] 0 containers: []
	W1216 05:27:08.300838  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:08.300844  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:08.300906  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:08.350714  620659 cri.go:89] found id: ""
	I1216 05:27:08.350737  620659 logs.go:282] 0 containers: []
	W1216 05:27:08.350746  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:08.350752  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:08.350817  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:08.388928  620659 cri.go:89] found id: ""
	I1216 05:27:08.388955  620659 logs.go:282] 0 containers: []
	W1216 05:27:08.388964  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:08.388973  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:08.389032  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:08.426368  620659 cri.go:89] found id: ""
	I1216 05:27:08.426396  620659 logs.go:282] 0 containers: []
	W1216 05:27:08.426404  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:08.426410  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:08.426471  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:08.452710  620659 cri.go:89] found id: ""
	I1216 05:27:08.452736  620659 logs.go:282] 0 containers: []
	W1216 05:27:08.452745  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:08.452753  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:08.452765  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:08.530274  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:08.530308  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:08.551685  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:08.551714  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:08.646221  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:08.646246  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:08.646258  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:08.677828  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:08.677863  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:11.222038  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:11.242103  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:11.242220  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:11.278432  620659 cri.go:89] found id: ""
	I1216 05:27:11.278461  620659 logs.go:282] 0 containers: []
	W1216 05:27:11.278470  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:11.278477  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:11.278535  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:11.317810  620659 cri.go:89] found id: ""
	I1216 05:27:11.317838  620659 logs.go:282] 0 containers: []
	W1216 05:27:11.317846  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:11.317854  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:11.317911  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:11.407630  620659 cri.go:89] found id: ""
	I1216 05:27:11.407651  620659 logs.go:282] 0 containers: []
	W1216 05:27:11.407660  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:11.407665  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:11.407732  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:11.457730  620659 cri.go:89] found id: ""
	I1216 05:27:11.457757  620659 logs.go:282] 0 containers: []
	W1216 05:27:11.457765  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:11.457771  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:11.457824  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:11.501033  620659 cri.go:89] found id: ""
	I1216 05:27:11.501055  620659 logs.go:282] 0 containers: []
	W1216 05:27:11.501076  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:11.501084  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:11.501142  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:11.533227  620659 cri.go:89] found id: ""
	I1216 05:27:11.533249  620659 logs.go:282] 0 containers: []
	W1216 05:27:11.533258  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:11.533263  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:11.533327  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:11.562246  620659 cri.go:89] found id: ""
	I1216 05:27:11.562271  620659 logs.go:282] 0 containers: []
	W1216 05:27:11.562279  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:11.562285  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:11.562344  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:11.592973  620659 cri.go:89] found id: ""
	I1216 05:27:11.592995  620659 logs.go:282] 0 containers: []
	W1216 05:27:11.593004  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:11.593012  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:11.593024  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:11.672091  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:11.672156  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:11.692925  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:11.692955  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:11.787812  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:11.787835  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:11.787847  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:11.832624  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:11.832701  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:14.391495  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:14.402419  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:14.402494  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:14.449924  620659 cri.go:89] found id: ""
	I1216 05:27:14.449954  620659 logs.go:282] 0 containers: []
	W1216 05:27:14.449963  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:14.449970  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:14.450035  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:14.507226  620659 cri.go:89] found id: ""
	I1216 05:27:14.507255  620659 logs.go:282] 0 containers: []
	W1216 05:27:14.507263  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:14.507269  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:14.507334  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:14.541866  620659 cri.go:89] found id: ""
	I1216 05:27:14.541893  620659 logs.go:282] 0 containers: []
	W1216 05:27:14.541901  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:14.541914  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:14.541976  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:14.577726  620659 cri.go:89] found id: ""
	I1216 05:27:14.577754  620659 logs.go:282] 0 containers: []
	W1216 05:27:14.577763  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:14.577770  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:14.577830  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:14.619125  620659 cri.go:89] found id: ""
	I1216 05:27:14.619152  620659 logs.go:282] 0 containers: []
	W1216 05:27:14.619161  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:14.619167  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:14.619225  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:14.672132  620659 cri.go:89] found id: ""
	I1216 05:27:14.672159  620659 logs.go:282] 0 containers: []
	W1216 05:27:14.672168  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:14.672175  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:14.672237  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:14.716589  620659 cri.go:89] found id: ""
	I1216 05:27:14.716616  620659 logs.go:282] 0 containers: []
	W1216 05:27:14.716624  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:14.716630  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:14.716694  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:14.758329  620659 cri.go:89] found id: ""
	I1216 05:27:14.758357  620659 logs.go:282] 0 containers: []
	W1216 05:27:14.758366  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:14.758374  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:14.758386  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:14.804842  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:14.804887  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:14.866676  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:14.866705  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:14.997859  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:14.997901  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:15.024064  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:15.024098  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:15.179102  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:17.679313  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:17.689162  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:17.689244  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:17.716161  620659 cri.go:89] found id: ""
	I1216 05:27:17.716188  620659 logs.go:282] 0 containers: []
	W1216 05:27:17.716196  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:17.716203  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:17.716261  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:17.746382  620659 cri.go:89] found id: ""
	I1216 05:27:17.746408  620659 logs.go:282] 0 containers: []
	W1216 05:27:17.746417  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:17.746423  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:17.746485  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:17.776235  620659 cri.go:89] found id: ""
	I1216 05:27:17.776261  620659 logs.go:282] 0 containers: []
	W1216 05:27:17.776270  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:17.776276  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:17.776338  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:17.801709  620659 cri.go:89] found id: ""
	I1216 05:27:17.801735  620659 logs.go:282] 0 containers: []
	W1216 05:27:17.801744  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:17.801750  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:17.801819  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:17.829948  620659 cri.go:89] found id: ""
	I1216 05:27:17.829974  620659 logs.go:282] 0 containers: []
	W1216 05:27:17.829983  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:17.829989  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:17.830052  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:17.855940  620659 cri.go:89] found id: ""
	I1216 05:27:17.855966  620659 logs.go:282] 0 containers: []
	W1216 05:27:17.855974  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:17.855980  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:17.856040  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:17.881532  620659 cri.go:89] found id: ""
	I1216 05:27:17.881559  620659 logs.go:282] 0 containers: []
	W1216 05:27:17.881568  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:17.881574  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:17.881648  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:17.907054  620659 cri.go:89] found id: ""
	I1216 05:27:17.907077  620659 logs.go:282] 0 containers: []
	W1216 05:27:17.907086  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:17.907095  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:17.907108  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:17.923080  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:17.923109  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:17.988878  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:17.988949  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:17.988978  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:18.020169  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:18.020212  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:18.053146  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:18.053177  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:20.630957  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:20.641200  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:20.641280  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:20.666086  620659 cri.go:89] found id: ""
	I1216 05:27:20.666113  620659 logs.go:282] 0 containers: []
	W1216 05:27:20.666122  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:20.666129  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:20.666192  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:20.693412  620659 cri.go:89] found id: ""
	I1216 05:27:20.693435  620659 logs.go:282] 0 containers: []
	W1216 05:27:20.693444  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:20.693463  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:20.693524  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:20.722097  620659 cri.go:89] found id: ""
	I1216 05:27:20.722123  620659 logs.go:282] 0 containers: []
	W1216 05:27:20.722131  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:20.722137  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:20.722198  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:20.746699  620659 cri.go:89] found id: ""
	I1216 05:27:20.746726  620659 logs.go:282] 0 containers: []
	W1216 05:27:20.746735  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:20.746741  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:20.746809  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:20.773672  620659 cri.go:89] found id: ""
	I1216 05:27:20.773699  620659 logs.go:282] 0 containers: []
	W1216 05:27:20.773708  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:20.773715  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:20.773805  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:20.798926  620659 cri.go:89] found id: ""
	I1216 05:27:20.798951  620659 logs.go:282] 0 containers: []
	W1216 05:27:20.798961  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:20.798967  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:20.799023  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:20.831077  620659 cri.go:89] found id: ""
	I1216 05:27:20.831106  620659 logs.go:282] 0 containers: []
	W1216 05:27:20.831116  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:20.831123  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:20.831182  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:20.856809  620659 cri.go:89] found id: ""
	I1216 05:27:20.856834  620659 logs.go:282] 0 containers: []
	W1216 05:27:20.856843  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:20.856852  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:20.856865  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:20.924446  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:20.924483  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:20.940559  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:20.940590  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:21.011945  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:21.011970  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:21.011995  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:21.043836  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:21.043869  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:23.582818  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:23.593630  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:23.593696  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:23.631332  620659 cri.go:89] found id: ""
	I1216 05:27:23.631354  620659 logs.go:282] 0 containers: []
	W1216 05:27:23.631362  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:23.631368  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:23.631424  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:23.659501  620659 cri.go:89] found id: ""
	I1216 05:27:23.659523  620659 logs.go:282] 0 containers: []
	W1216 05:27:23.659531  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:23.659537  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:23.659608  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:23.699359  620659 cri.go:89] found id: ""
	I1216 05:27:23.699440  620659 logs.go:282] 0 containers: []
	W1216 05:27:23.699462  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:23.699479  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:23.699589  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:23.735539  620659 cri.go:89] found id: ""
	I1216 05:27:23.735562  620659 logs.go:282] 0 containers: []
	W1216 05:27:23.735571  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:23.735577  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:23.735636  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:23.783418  620659 cri.go:89] found id: ""
	I1216 05:27:23.783440  620659 logs.go:282] 0 containers: []
	W1216 05:27:23.783448  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:23.783454  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:23.783513  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:23.819522  620659 cri.go:89] found id: ""
	I1216 05:27:23.819545  620659 logs.go:282] 0 containers: []
	W1216 05:27:23.819553  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:23.819559  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:23.819620  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:23.855932  620659 cri.go:89] found id: ""
	I1216 05:27:23.856018  620659 logs.go:282] 0 containers: []
	W1216 05:27:23.856043  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:23.856061  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:23.856177  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:23.888511  620659 cri.go:89] found id: ""
	I1216 05:27:23.888533  620659 logs.go:282] 0 containers: []
	W1216 05:27:23.888542  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:23.888551  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:23.888563  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:23.967005  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:23.967086  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:23.982992  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:23.983024  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:24.119590  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:24.119657  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:24.119687  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:24.159918  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:24.159952  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
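The block above is one full iteration of minikube's apiserver wait loop, and it repeats below at roughly three-second intervals until the wait gives up. Each pass probes for a kube-apiserver process, lists CRI containers for every expected control-plane component, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status diagnostics. The describe-nodes step fails every time with "The connection to the server localhost:8443 was refused" for the same reason the probes come back empty: no kube-apiserver container is running, so nothing is listening on the apiserver port. A minimal shell sketch of the cycle follows, assembled from the commands logged above; the loop-and-sleep wrapper and the three-second interval are assumptions read off the timestamps, not minikube's actual Go implementation (logs.go/cri.go):

    # Hypothetical reconstruction of the probe/gather cycle in this log.
    # The command strings are copied from the ssh_runner lines above
    # (modulo shell quoting); the loop structure and sleep are inferred,
    # and minikube bounds the real loop with a timeout omitted here.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      # List CRI containers for each expected component; empty output
      # corresponds to the 'found id: ""' lines above.
      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                  kube-controller-manager kindnet storage-provisioner; do
        sudo crictl ps -a --quiet --name="$name"
      done
      # With zero containers found, gather diagnostics.
      sudo journalctl -u kubelet -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig  # fails: localhost:8443 refused
      sudo journalctl -u crio -n 400
      sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
      sleep 3
    done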
	I1216 05:27:26.697167  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:26.709722  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:26.709790  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:26.748449  620659 cri.go:89] found id: ""
	I1216 05:27:26.748481  620659 logs.go:282] 0 containers: []
	W1216 05:27:26.748490  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:26.748498  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:26.748561  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:26.779782  620659 cri.go:89] found id: ""
	I1216 05:27:26.779809  620659 logs.go:282] 0 containers: []
	W1216 05:27:26.779818  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:26.779827  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:26.779885  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:26.813680  620659 cri.go:89] found id: ""
	I1216 05:27:26.813707  620659 logs.go:282] 0 containers: []
	W1216 05:27:26.813716  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:26.813722  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:26.813782  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:26.853784  620659 cri.go:89] found id: ""
	I1216 05:27:26.853805  620659 logs.go:282] 0 containers: []
	W1216 05:27:26.853813  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:26.853819  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:26.853878  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:26.883615  620659 cri.go:89] found id: ""
	I1216 05:27:26.883640  620659 logs.go:282] 0 containers: []
	W1216 05:27:26.883704  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:26.883714  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:26.883809  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:26.934814  620659 cri.go:89] found id: ""
	I1216 05:27:26.934845  620659 logs.go:282] 0 containers: []
	W1216 05:27:26.934855  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:26.934867  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:26.934939  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:26.966268  620659 cri.go:89] found id: ""
	I1216 05:27:26.966306  620659 logs.go:282] 0 containers: []
	W1216 05:27:26.966332  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:26.966340  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:26.966416  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:27.002476  620659 cri.go:89] found id: ""
	I1216 05:27:27.002544  620659 logs.go:282] 0 containers: []
	W1216 05:27:27.002557  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:27.002568  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:27.002588  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:27.089291  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:27.089329  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:27.112195  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:27.112226  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:27.214864  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:27.214898  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:27.214910  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:27.264660  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:27.264697  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:29.809570  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:29.826584  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:29.826660  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:29.870426  620659 cri.go:89] found id: ""
	I1216 05:27:29.870456  620659 logs.go:282] 0 containers: []
	W1216 05:27:29.870466  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:29.870472  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:29.870535  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:29.915498  620659 cri.go:89] found id: ""
	I1216 05:27:29.915527  620659 logs.go:282] 0 containers: []
	W1216 05:27:29.915536  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:29.915542  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:29.915609  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:29.959325  620659 cri.go:89] found id: ""
	I1216 05:27:29.959353  620659 logs.go:282] 0 containers: []
	W1216 05:27:29.959363  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:29.959368  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:29.959426  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:29.998906  620659 cri.go:89] found id: ""
	I1216 05:27:29.998936  620659 logs.go:282] 0 containers: []
	W1216 05:27:29.998946  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:29.998952  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:29.999021  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:30.038127  620659 cri.go:89] found id: ""
	I1216 05:27:30.038154  620659 logs.go:282] 0 containers: []
	W1216 05:27:30.038164  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:30.038170  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:30.038233  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:30.074238  620659 cri.go:89] found id: ""
	I1216 05:27:30.074332  620659 logs.go:282] 0 containers: []
	W1216 05:27:30.074359  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:30.074368  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:30.074494  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:30.104130  620659 cri.go:89] found id: ""
	I1216 05:27:30.104160  620659 logs.go:282] 0 containers: []
	W1216 05:27:30.104170  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:30.104176  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:30.104243  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:30.134436  620659 cri.go:89] found id: ""
	I1216 05:27:30.134463  620659 logs.go:282] 0 containers: []
	W1216 05:27:30.134472  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:30.134481  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:30.134493  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:30.170177  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:30.170215  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:30.217538  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:30.217568  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:30.292535  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:30.292578  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:30.309470  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:30.309499  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:30.463315  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:32.964145  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:32.976249  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:32.976322  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:33.009810  620659 cri.go:89] found id: ""
	I1216 05:27:33.009854  620659 logs.go:282] 0 containers: []
	W1216 05:27:33.009864  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:33.009871  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:33.009949  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:33.071614  620659 cri.go:89] found id: ""
	I1216 05:27:33.071637  620659 logs.go:282] 0 containers: []
	W1216 05:27:33.071646  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:33.071652  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:33.071712  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:33.128242  620659 cri.go:89] found id: ""
	I1216 05:27:33.128268  620659 logs.go:282] 0 containers: []
	W1216 05:27:33.128293  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:33.128302  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:33.128392  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:33.201405  620659 cri.go:89] found id: ""
	I1216 05:27:33.201435  620659 logs.go:282] 0 containers: []
	W1216 05:27:33.201445  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:33.201451  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:33.201517  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:33.251383  620659 cri.go:89] found id: ""
	I1216 05:27:33.251410  620659 logs.go:282] 0 containers: []
	W1216 05:27:33.251421  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:33.251427  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:33.251496  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:33.302602  620659 cri.go:89] found id: ""
	I1216 05:27:33.302626  620659 logs.go:282] 0 containers: []
	W1216 05:27:33.302634  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:33.302641  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:33.302715  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:33.392432  620659 cri.go:89] found id: ""
	I1216 05:27:33.392453  620659 logs.go:282] 0 containers: []
	W1216 05:27:33.392461  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:33.392468  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:33.392524  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:33.450631  620659 cri.go:89] found id: ""
	I1216 05:27:33.450652  620659 logs.go:282] 0 containers: []
	W1216 05:27:33.450661  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:33.450669  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:33.450682  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:33.479737  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:33.479769  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:33.587180  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:33.587206  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:33.587219  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:33.637809  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:33.637851  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:33.691076  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:33.691117  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:36.322172  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:36.343919  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:36.343986  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:36.381530  620659 cri.go:89] found id: ""
	I1216 05:27:36.381552  620659 logs.go:282] 0 containers: []
	W1216 05:27:36.381560  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:36.381566  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:36.381634  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:36.415954  620659 cri.go:89] found id: ""
	I1216 05:27:36.415976  620659 logs.go:282] 0 containers: []
	W1216 05:27:36.415983  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:36.415989  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:36.416044  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:36.452399  620659 cri.go:89] found id: ""
	I1216 05:27:36.452421  620659 logs.go:282] 0 containers: []
	W1216 05:27:36.452430  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:36.452436  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:36.452492  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:36.484902  620659 cri.go:89] found id: ""
	I1216 05:27:36.484975  620659 logs.go:282] 0 containers: []
	W1216 05:27:36.484998  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:36.485017  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:36.485114  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:36.517712  620659 cri.go:89] found id: ""
	I1216 05:27:36.517736  620659 logs.go:282] 0 containers: []
	W1216 05:27:36.517744  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:36.517750  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:36.517810  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:36.558906  620659 cri.go:89] found id: ""
	I1216 05:27:36.558928  620659 logs.go:282] 0 containers: []
	W1216 05:27:36.558937  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:36.558943  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:36.559000  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:36.603084  620659 cri.go:89] found id: ""
	I1216 05:27:36.603106  620659 logs.go:282] 0 containers: []
	W1216 05:27:36.603115  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:36.603120  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:36.603181  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:36.643294  620659 cri.go:89] found id: ""
	I1216 05:27:36.643317  620659 logs.go:282] 0 containers: []
	W1216 05:27:36.643326  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:36.643335  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:36.643346  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:36.764775  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:36.764812  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:36.784979  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:36.785005  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:36.908608  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:36.908628  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:36.908649  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:36.950438  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:36.950519  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:39.498665  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:39.511019  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:39.511095  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:39.545192  620659 cri.go:89] found id: ""
	I1216 05:27:39.545238  620659 logs.go:282] 0 containers: []
	W1216 05:27:39.545248  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:39.545254  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:39.545313  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:39.582848  620659 cri.go:89] found id: ""
	I1216 05:27:39.582870  620659 logs.go:282] 0 containers: []
	W1216 05:27:39.582879  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:39.582885  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:39.582946  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:39.628822  620659 cri.go:89] found id: ""
	I1216 05:27:39.628844  620659 logs.go:282] 0 containers: []
	W1216 05:27:39.628853  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:39.628860  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:39.628918  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:39.657381  620659 cri.go:89] found id: ""
	I1216 05:27:39.657404  620659 logs.go:282] 0 containers: []
	W1216 05:27:39.657414  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:39.657419  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:39.657479  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:39.693908  620659 cri.go:89] found id: ""
	I1216 05:27:39.693948  620659 logs.go:282] 0 containers: []
	W1216 05:27:39.693962  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:39.693973  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:39.694055  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:39.740558  620659 cri.go:89] found id: ""
	I1216 05:27:39.740657  620659 logs.go:282] 0 containers: []
	W1216 05:27:39.740670  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:39.740678  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:39.740758  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:39.779411  620659 cri.go:89] found id: ""
	I1216 05:27:39.779448  620659 logs.go:282] 0 containers: []
	W1216 05:27:39.779458  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:39.779464  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:39.779566  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:39.819788  620659 cri.go:89] found id: ""
	I1216 05:27:39.819814  620659 logs.go:282] 0 containers: []
	W1216 05:27:39.819823  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:39.819834  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:39.819847  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:39.922238  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:39.922258  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:39.922270  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:39.957909  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:39.957948  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:40.004503  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:40.004563  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:40.095485  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:40.095528  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:42.625204  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:42.639187  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:42.639259  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:42.678151  620659 cri.go:89] found id: ""
	I1216 05:27:42.678180  620659 logs.go:282] 0 containers: []
	W1216 05:27:42.678188  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:42.678195  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:42.678257  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:42.711338  620659 cri.go:89] found id: ""
	I1216 05:27:42.711361  620659 logs.go:282] 0 containers: []
	W1216 05:27:42.711369  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:42.711375  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:42.711440  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:42.741052  620659 cri.go:89] found id: ""
	I1216 05:27:42.741134  620659 logs.go:282] 0 containers: []
	W1216 05:27:42.741144  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:42.741150  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:42.741208  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:42.769780  620659 cri.go:89] found id: ""
	I1216 05:27:42.769808  620659 logs.go:282] 0 containers: []
	W1216 05:27:42.769817  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:42.769823  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:42.769883  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:42.797674  620659 cri.go:89] found id: ""
	I1216 05:27:42.797702  620659 logs.go:282] 0 containers: []
	W1216 05:27:42.797710  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:42.797716  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:42.797779  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:42.833609  620659 cri.go:89] found id: ""
	I1216 05:27:42.833636  620659 logs.go:282] 0 containers: []
	W1216 05:27:42.833646  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:42.833665  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:42.833734  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:42.862331  620659 cri.go:89] found id: ""
	I1216 05:27:42.862359  620659 logs.go:282] 0 containers: []
	W1216 05:27:42.862368  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:42.862373  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:42.862436  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:42.893456  620659 cri.go:89] found id: ""
	I1216 05:27:42.893483  620659 logs.go:282] 0 containers: []
	W1216 05:27:42.893492  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:42.893501  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:42.893515  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:42.973523  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:42.973608  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:42.990990  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:42.991015  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:43.099209  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:43.099280  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:43.099309  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:43.146971  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:43.147056  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:45.690780  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:45.701042  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:45.701231  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:45.742482  620659 cri.go:89] found id: ""
	I1216 05:27:45.742507  620659 logs.go:282] 0 containers: []
	W1216 05:27:45.742516  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:45.742522  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:45.742585  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:45.784555  620659 cri.go:89] found id: ""
	I1216 05:27:45.784579  620659 logs.go:282] 0 containers: []
	W1216 05:27:45.784659  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:45.784670  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:45.784746  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:45.839123  620659 cri.go:89] found id: ""
	I1216 05:27:45.839145  620659 logs.go:282] 0 containers: []
	W1216 05:27:45.839154  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:45.839160  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:45.839224  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:45.883172  620659 cri.go:89] found id: ""
	I1216 05:27:45.883202  620659 logs.go:282] 0 containers: []
	W1216 05:27:45.883213  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:45.883219  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:45.883283  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:45.936918  620659 cri.go:89] found id: ""
	I1216 05:27:45.936941  620659 logs.go:282] 0 containers: []
	W1216 05:27:45.936949  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:45.936955  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:45.937118  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:45.999913  620659 cri.go:89] found id: ""
	I1216 05:27:45.999960  620659 logs.go:282] 0 containers: []
	W1216 05:27:45.999969  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:45.999975  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:46.000040  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:46.053919  620659 cri.go:89] found id: ""
	I1216 05:27:46.053943  620659 logs.go:282] 0 containers: []
	W1216 05:27:46.054022  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:46.054030  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:46.054100  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:46.118598  620659 cri.go:89] found id: ""
	I1216 05:27:46.118620  620659 logs.go:282] 0 containers: []
	W1216 05:27:46.118629  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:46.118645  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:46.118664  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:46.300764  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:46.300789  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:46.409607  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:46.409691  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:46.432538  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:46.432566  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:46.595659  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:46.595677  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:46.595689  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:49.141217  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:49.151942  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:49.152009  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:49.187931  620659 cri.go:89] found id: ""
	I1216 05:27:49.187953  620659 logs.go:282] 0 containers: []
	W1216 05:27:49.187961  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:49.187968  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:49.188029  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:49.215626  620659 cri.go:89] found id: ""
	I1216 05:27:49.215649  620659 logs.go:282] 0 containers: []
	W1216 05:27:49.215657  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:49.215663  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:49.215721  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:49.255362  620659 cri.go:89] found id: ""
	I1216 05:27:49.255385  620659 logs.go:282] 0 containers: []
	W1216 05:27:49.255393  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:49.255403  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:49.255469  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:49.290127  620659 cri.go:89] found id: ""
	I1216 05:27:49.290150  620659 logs.go:282] 0 containers: []
	W1216 05:27:49.290158  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:49.290165  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:49.290225  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:49.336002  620659 cri.go:89] found id: ""
	I1216 05:27:49.336024  620659 logs.go:282] 0 containers: []
	W1216 05:27:49.336033  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:49.336046  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:49.336101  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:49.389282  620659 cri.go:89] found id: ""
	I1216 05:27:49.389305  620659 logs.go:282] 0 containers: []
	W1216 05:27:49.389315  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:49.389321  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:49.389385  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:49.455723  620659 cri.go:89] found id: ""
	I1216 05:27:49.455746  620659 logs.go:282] 0 containers: []
	W1216 05:27:49.455755  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:49.455762  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:49.455823  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:49.490955  620659 cri.go:89] found id: ""
	I1216 05:27:49.491036  620659 logs.go:282] 0 containers: []
	W1216 05:27:49.491059  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:49.491080  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:49.491127  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:49.529928  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:49.530007  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:49.603318  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:49.603360  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:49.620999  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:49.621029  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:49.718627  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:49.718649  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:49.718662  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:52.257588  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:52.268706  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:52.268775  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:52.296831  620659 cri.go:89] found id: ""
	I1216 05:27:52.296853  620659 logs.go:282] 0 containers: []
	W1216 05:27:52.296861  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:52.296867  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:52.296937  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:52.343378  620659 cri.go:89] found id: ""
	I1216 05:27:52.343401  620659 logs.go:282] 0 containers: []
	W1216 05:27:52.343410  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:52.343416  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:52.343474  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:52.407352  620659 cri.go:89] found id: ""
	I1216 05:27:52.407374  620659 logs.go:282] 0 containers: []
	W1216 05:27:52.407382  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:52.407388  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:52.407445  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:52.459683  620659 cri.go:89] found id: ""
	I1216 05:27:52.459709  620659 logs.go:282] 0 containers: []
	W1216 05:27:52.459719  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:52.459725  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:52.459781  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:52.495251  620659 cri.go:89] found id: ""
	I1216 05:27:52.495278  620659 logs.go:282] 0 containers: []
	W1216 05:27:52.495287  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:52.495294  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:52.495355  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:52.524077  620659 cri.go:89] found id: ""
	I1216 05:27:52.524099  620659 logs.go:282] 0 containers: []
	W1216 05:27:52.524108  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:52.524115  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:52.524178  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:52.562899  620659 cri.go:89] found id: ""
	I1216 05:27:52.562922  620659 logs.go:282] 0 containers: []
	W1216 05:27:52.562931  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:52.562937  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:52.562997  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:52.596389  620659 cri.go:89] found id: ""
	I1216 05:27:52.596415  620659 logs.go:282] 0 containers: []
	W1216 05:27:52.596424  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:52.596433  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:52.596446  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:52.698964  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:52.699068  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:52.719859  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:52.719889  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:52.805828  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:52.805850  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:52.805863  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:52.838893  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:52.838926  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:55.379724  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:55.391788  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:55.391959  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:55.424925  620659 cri.go:89] found id: ""
	I1216 05:27:55.424998  620659 logs.go:282] 0 containers: []
	W1216 05:27:55.425021  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:55.425039  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:55.425157  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:55.454658  620659 cri.go:89] found id: ""
	I1216 05:27:55.454738  620659 logs.go:282] 0 containers: []
	W1216 05:27:55.454762  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:55.454780  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:55.454884  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:55.484377  620659 cri.go:89] found id: ""
	I1216 05:27:55.484465  620659 logs.go:282] 0 containers: []
	W1216 05:27:55.484488  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:55.484507  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:55.484622  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:55.511758  620659 cri.go:89] found id: ""
	I1216 05:27:55.511795  620659 logs.go:282] 0 containers: []
	W1216 05:27:55.511831  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:55.511844  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:55.511920  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:55.555407  620659 cri.go:89] found id: ""
	I1216 05:27:55.555447  620659 logs.go:282] 0 containers: []
	W1216 05:27:55.555456  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:55.555462  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:55.555566  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:55.629189  620659 cri.go:89] found id: ""
	I1216 05:27:55.629269  620659 logs.go:282] 0 containers: []
	W1216 05:27:55.629293  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:55.629313  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:55.629390  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:55.687465  620659 cri.go:89] found id: ""
	I1216 05:27:55.687546  620659 logs.go:282] 0 containers: []
	W1216 05:27:55.687578  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:55.687598  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:55.687687  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:55.728871  620659 cri.go:89] found id: ""
	I1216 05:27:55.728898  620659 logs.go:282] 0 containers: []
	W1216 05:27:55.728906  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:55.728916  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:55.728952  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:55.804392  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:55.804431  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:55.830193  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:55.830226  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:55.928983  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:27:55.929007  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:55.929020  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:55.964276  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:55.964315  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
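
Each cycle in this block runs the same sequence: check for a kube-apiserver process with pgrep, then ask CRI-O for each expected control-plane container by name. Every query returns an empty ID list ("found id: """), so no control-plane container was ever created. The checks can be reproduced by hand on the node; a minimal sketch using only commands already shown in the log:

    # Mirror the per-component checks from the log above (run on the minikube node).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # newest matching apiserver process, if any
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        echo "== $c =="
        sudo crictl ps -a --quiet --name="$c"           # empty output matches 'found id: ""' above
    done
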
	I1216 05:27:58.514953  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:27:58.526128  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:27:58.526197  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:27:58.553478  620659 cri.go:89] found id: ""
	I1216 05:27:58.553501  620659 logs.go:282] 0 containers: []
	W1216 05:27:58.553509  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:27:58.553516  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:27:58.553573  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:27:58.616109  620659 cri.go:89] found id: ""
	I1216 05:27:58.616133  620659 logs.go:282] 0 containers: []
	W1216 05:27:58.616141  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:27:58.616147  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:27:58.616206  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:27:58.671748  620659 cri.go:89] found id: ""
	I1216 05:27:58.671770  620659 logs.go:282] 0 containers: []
	W1216 05:27:58.672016  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:27:58.672029  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:27:58.672096  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:27:58.712871  620659 cri.go:89] found id: ""
	I1216 05:27:58.712893  620659 logs.go:282] 0 containers: []
	W1216 05:27:58.712901  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:27:58.712907  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:27:58.712962  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:27:58.741856  620659 cri.go:89] found id: ""
	I1216 05:27:58.741936  620659 logs.go:282] 0 containers: []
	W1216 05:27:58.741970  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:27:58.741993  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:27:58.742088  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:27:58.774454  620659 cri.go:89] found id: ""
	I1216 05:27:58.774543  620659 logs.go:282] 0 containers: []
	W1216 05:27:58.774573  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:27:58.774608  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:27:58.774725  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:27:58.820732  620659 cri.go:89] found id: ""
	I1216 05:27:58.820803  620659 logs.go:282] 0 containers: []
	W1216 05:27:58.820837  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:27:58.820863  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:27:58.820951  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:27:58.873249  620659 cri.go:89] found id: ""
	I1216 05:27:58.873276  620659 logs.go:282] 0 containers: []
	W1216 05:27:58.873285  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:27:58.873293  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:27:58.873306  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:27:58.916521  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:27:58.916565  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:27:58.958525  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:27:58.958554  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:27:59.034092  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:27:59.034129  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:27:59.053539  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:27:59.053568  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:27:59.133575  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
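
The repeated kubectl failure is the same symptom seen from the client side: nothing is listening on the apiserver port, so every "describe nodes" attempt exits with "connection to the server localhost:8443 was refused". One plausible way to confirm this by hand, assuming shell access to the node (these two commands are not part of the captured log):

    sudo ss -tlnp | grep ':8443' || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"
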
	I1216 05:28:01.633825  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:01.647662  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:01.647787  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:01.683647  620659 cri.go:89] found id: ""
	I1216 05:28:01.683671  620659 logs.go:282] 0 containers: []
	W1216 05:28:01.683704  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:01.683717  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:01.683794  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:01.719402  620659 cri.go:89] found id: ""
	I1216 05:28:01.719430  620659 logs.go:282] 0 containers: []
	W1216 05:28:01.719438  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:01.719445  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:01.719509  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:01.754062  620659 cri.go:89] found id: ""
	I1216 05:28:01.754087  620659 logs.go:282] 0 containers: []
	W1216 05:28:01.754096  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:01.754102  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:01.754159  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:01.793154  620659 cri.go:89] found id: ""
	I1216 05:28:01.793180  620659 logs.go:282] 0 containers: []
	W1216 05:28:01.793190  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:01.793196  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:01.793254  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:01.835811  620659 cri.go:89] found id: ""
	I1216 05:28:01.835836  620659 logs.go:282] 0 containers: []
	W1216 05:28:01.835845  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:01.835852  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:01.835919  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:01.864662  620659 cri.go:89] found id: ""
	I1216 05:28:01.864686  620659 logs.go:282] 0 containers: []
	W1216 05:28:01.864694  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:01.864700  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:01.864756  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:01.898515  620659 cri.go:89] found id: ""
	I1216 05:28:01.898544  620659 logs.go:282] 0 containers: []
	W1216 05:28:01.898554  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:01.898560  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:01.898615  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:01.930979  620659 cri.go:89] found id: ""
	I1216 05:28:01.931002  620659 logs.go:282] 0 containers: []
	W1216 05:28:01.931010  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:01.931019  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:01.931031  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:01.947520  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:01.947552  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:02.046555  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:02.046577  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:02.046591  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:02.080301  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:02.080339  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:02.122660  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:02.122690  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
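
With no containers to inspect, the tool falls back to four host-level sources per cycle: the kubelet journal, the CRI-O journal, kernel messages at warning level or above, and a raw container-status listing. The commands below are copied from the Run: lines above and can be issued directly on the node; note the container-status command prefers crictl and only falls back to docker when crictl is absent:

    sudo journalctl -u kubelet -n 400        # kubelet service log, last 400 lines
    sudo journalctl -u crio -n 400           # CRI-O service log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
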
	I1216 05:28:04.697372  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:04.707601  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:04.707725  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:04.770365  620659 cri.go:89] found id: ""
	I1216 05:28:04.770438  620659 logs.go:282] 0 containers: []
	W1216 05:28:04.770470  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:04.770490  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:04.770594  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:04.823563  620659 cri.go:89] found id: ""
	I1216 05:28:04.823645  620659 logs.go:282] 0 containers: []
	W1216 05:28:04.823669  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:04.823687  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:04.823791  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:04.885935  620659 cri.go:89] found id: ""
	I1216 05:28:04.886019  620659 logs.go:282] 0 containers: []
	W1216 05:28:04.886042  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:04.886079  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:04.886175  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:04.935949  620659 cri.go:89] found id: ""
	I1216 05:28:04.936026  620659 logs.go:282] 0 containers: []
	W1216 05:28:04.936055  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:04.936074  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:04.936157  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:04.987906  620659 cri.go:89] found id: ""
	I1216 05:28:04.987984  620659 logs.go:282] 0 containers: []
	W1216 05:28:04.988024  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:04.988046  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:04.988134  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:05.026514  620659 cri.go:89] found id: ""
	I1216 05:28:05.026590  620659 logs.go:282] 0 containers: []
	W1216 05:28:05.026613  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:05.026635  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:05.026749  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:05.079362  620659 cri.go:89] found id: ""
	I1216 05:28:05.079450  620659 logs.go:282] 0 containers: []
	W1216 05:28:05.079481  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:05.079502  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:05.079611  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:05.114140  620659 cri.go:89] found id: ""
	I1216 05:28:05.114215  620659 logs.go:282] 0 containers: []
	W1216 05:28:05.114237  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:05.114259  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:05.114296  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:05.206929  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:05.206969  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:05.226112  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:05.226142  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:05.365492  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:05.365517  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:05.365531  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:05.414371  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:05.414407  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:07.972772  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:07.982956  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:07.983024  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:08.014923  620659 cri.go:89] found id: ""
	I1216 05:28:08.014947  620659 logs.go:282] 0 containers: []
	W1216 05:28:08.014956  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:08.014962  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:08.015023  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:08.047808  620659 cri.go:89] found id: ""
	I1216 05:28:08.047832  620659 logs.go:282] 0 containers: []
	W1216 05:28:08.047841  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:08.047847  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:08.047908  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:08.076051  620659 cri.go:89] found id: ""
	I1216 05:28:08.076074  620659 logs.go:282] 0 containers: []
	W1216 05:28:08.076089  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:08.076095  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:08.076155  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:08.102464  620659 cri.go:89] found id: ""
	I1216 05:28:08.102487  620659 logs.go:282] 0 containers: []
	W1216 05:28:08.102496  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:08.102503  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:08.102560  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:08.128678  620659 cri.go:89] found id: ""
	I1216 05:28:08.128701  620659 logs.go:282] 0 containers: []
	W1216 05:28:08.128710  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:08.128716  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:08.128775  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:08.155463  620659 cri.go:89] found id: ""
	I1216 05:28:08.155487  620659 logs.go:282] 0 containers: []
	W1216 05:28:08.155495  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:08.155502  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:08.155564  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:08.186454  620659 cri.go:89] found id: ""
	I1216 05:28:08.186485  620659 logs.go:282] 0 containers: []
	W1216 05:28:08.186494  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:08.186500  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:08.186561  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:08.211737  620659 cri.go:89] found id: ""
	I1216 05:28:08.211766  620659 logs.go:282] 0 containers: []
	W1216 05:28:08.211775  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:08.211784  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:08.211797  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:08.244150  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:08.244187  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:08.291774  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:08.291848  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:08.370607  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:08.370655  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:08.388137  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:08.388168  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:08.466406  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
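
The timestamps show the whole cycle repeating at roughly three-second intervals (05:27:55, 05:27:58, 05:28:01, ...), i.e. a fixed-interval poll waiting for the apiserver to come up. A hedged sketch of an equivalent wait loop; the interval and timeout here are illustrative, not values taken from minikube's source:

    # Poll until an apiserver process appears, or give up after five minutes.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "timed out waiting for kube-apiserver" >&2
            exit 1
        fi
        sleep 3
    done
    echo "kube-apiserver is running"
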
	I1216 05:28:10.966618  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:10.983665  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:10.983742  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:11.025948  620659 cri.go:89] found id: ""
	I1216 05:28:11.025975  620659 logs.go:282] 0 containers: []
	W1216 05:28:11.025984  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:11.025990  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:11.026056  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:11.072412  620659 cri.go:89] found id: ""
	I1216 05:28:11.072439  620659 logs.go:282] 0 containers: []
	W1216 05:28:11.072448  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:11.072454  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:11.072517  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:11.117867  620659 cri.go:89] found id: ""
	I1216 05:28:11.117893  620659 logs.go:282] 0 containers: []
	W1216 05:28:11.117902  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:11.117908  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:11.117972  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:11.176004  620659 cri.go:89] found id: ""
	I1216 05:28:11.176031  620659 logs.go:282] 0 containers: []
	W1216 05:28:11.176041  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:11.176048  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:11.176109  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:11.208123  620659 cri.go:89] found id: ""
	I1216 05:28:11.208154  620659 logs.go:282] 0 containers: []
	W1216 05:28:11.208163  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:11.208172  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:11.208244  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:11.282145  620659 cri.go:89] found id: ""
	I1216 05:28:11.282178  620659 logs.go:282] 0 containers: []
	W1216 05:28:11.282192  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:11.282199  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:11.282270  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:11.343676  620659 cri.go:89] found id: ""
	I1216 05:28:11.343700  620659 logs.go:282] 0 containers: []
	W1216 05:28:11.343709  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:11.343715  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:11.343810  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:11.397960  620659 cri.go:89] found id: ""
	I1216 05:28:11.398040  620659 logs.go:282] 0 containers: []
	W1216 05:28:11.398049  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:11.398058  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:11.398070  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:11.538576  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:11.538631  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:11.561021  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:11.561058  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:11.738326  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:11.738358  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:11.738372  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:11.795376  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:11.795500  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:14.363382  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:14.375398  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:14.375476  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:14.408673  620659 cri.go:89] found id: ""
	I1216 05:28:14.408697  620659 logs.go:282] 0 containers: []
	W1216 05:28:14.408706  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:14.408712  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:14.408781  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:14.455985  620659 cri.go:89] found id: ""
	I1216 05:28:14.456007  620659 logs.go:282] 0 containers: []
	W1216 05:28:14.456016  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:14.456032  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:14.456103  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:14.494023  620659 cri.go:89] found id: ""
	I1216 05:28:14.494043  620659 logs.go:282] 0 containers: []
	W1216 05:28:14.494052  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:14.494058  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:14.494116  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:14.538988  620659 cri.go:89] found id: ""
	I1216 05:28:14.539011  620659 logs.go:282] 0 containers: []
	W1216 05:28:14.539020  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:14.539026  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:14.539085  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:14.571896  620659 cri.go:89] found id: ""
	I1216 05:28:14.571918  620659 logs.go:282] 0 containers: []
	W1216 05:28:14.571926  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:14.571932  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:14.571989  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:14.615030  620659 cri.go:89] found id: ""
	I1216 05:28:14.615051  620659 logs.go:282] 0 containers: []
	W1216 05:28:14.615059  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:14.615065  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:14.615125  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:14.659434  620659 cri.go:89] found id: ""
	I1216 05:28:14.659460  620659 logs.go:282] 0 containers: []
	W1216 05:28:14.659469  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:14.659480  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:14.659554  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:14.713729  620659 cri.go:89] found id: ""
	I1216 05:28:14.713757  620659 logs.go:282] 0 containers: []
	W1216 05:28:14.713766  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:14.713775  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:14.713788  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:14.768037  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:14.768063  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:14.862291  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:14.862342  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:14.888858  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:14.888890  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:14.974783  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:14.974802  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:14.974817  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
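
The "describe nodes" step uses the node-local kubectl binary and kubeconfig, so the failing command from the warnings above can be re-run verbatim once the apiserver is expected to be up:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    # While the apiserver is down this exits 1 with:
    #   The connection to the server localhost:8443 was refused - did you specify the right host or port?
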
	I1216 05:28:17.513196  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:17.523543  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:17.523613  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:17.553571  620659 cri.go:89] found id: ""
	I1216 05:28:17.553595  620659 logs.go:282] 0 containers: []
	W1216 05:28:17.553603  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:17.553617  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:17.553679  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:17.600519  620659 cri.go:89] found id: ""
	I1216 05:28:17.600541  620659 logs.go:282] 0 containers: []
	W1216 05:28:17.600550  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:17.600556  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:17.600613  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:17.629461  620659 cri.go:89] found id: ""
	I1216 05:28:17.629483  620659 logs.go:282] 0 containers: []
	W1216 05:28:17.629492  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:17.629498  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:17.629556  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:17.661606  620659 cri.go:89] found id: ""
	I1216 05:28:17.661644  620659 logs.go:282] 0 containers: []
	W1216 05:28:17.661652  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:17.661658  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:17.661718  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:17.694435  620659 cri.go:89] found id: ""
	I1216 05:28:17.694471  620659 logs.go:282] 0 containers: []
	W1216 05:28:17.694480  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:17.694486  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:17.694546  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:17.732523  620659 cri.go:89] found id: ""
	I1216 05:28:17.732552  620659 logs.go:282] 0 containers: []
	W1216 05:28:17.732560  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:17.732567  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:17.732628  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:17.769949  620659 cri.go:89] found id: ""
	I1216 05:28:17.769973  620659 logs.go:282] 0 containers: []
	W1216 05:28:17.769981  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:17.769987  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:17.770045  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:17.798188  620659 cri.go:89] found id: ""
	I1216 05:28:17.798211  620659 logs.go:282] 0 containers: []
	W1216 05:28:17.798220  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:17.798229  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:17.798242  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:17.880173  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:17.880252  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:17.904629  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:17.904657  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:17.999994  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:18.000015  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:18.000027  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:18.045572  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:18.045617  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:20.604155  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:20.620794  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:20.620862  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:20.648978  620659 cri.go:89] found id: ""
	I1216 05:28:20.649007  620659 logs.go:282] 0 containers: []
	W1216 05:28:20.649017  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:20.649024  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:20.649097  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:20.679497  620659 cri.go:89] found id: ""
	I1216 05:28:20.679520  620659 logs.go:282] 0 containers: []
	W1216 05:28:20.679529  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:20.679537  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:20.679595  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:20.714718  620659 cri.go:89] found id: ""
	I1216 05:28:20.714741  620659 logs.go:282] 0 containers: []
	W1216 05:28:20.714749  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:20.714755  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:20.714816  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:20.747367  620659 cri.go:89] found id: ""
	I1216 05:28:20.747389  620659 logs.go:282] 0 containers: []
	W1216 05:28:20.747397  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:20.747403  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:20.747461  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:20.783971  620659 cri.go:89] found id: ""
	I1216 05:28:20.783993  620659 logs.go:282] 0 containers: []
	W1216 05:28:20.784002  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:20.784008  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:20.784065  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:20.815574  620659 cri.go:89] found id: ""
	I1216 05:28:20.815598  620659 logs.go:282] 0 containers: []
	W1216 05:28:20.815606  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:20.815619  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:20.815680  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:20.851236  620659 cri.go:89] found id: ""
	I1216 05:28:20.851258  620659 logs.go:282] 0 containers: []
	W1216 05:28:20.851266  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:20.851273  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:20.851334  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:20.886615  620659 cri.go:89] found id: ""
	I1216 05:28:20.886697  620659 logs.go:282] 0 containers: []
	W1216 05:28:20.886721  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:20.886745  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:20.886796  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:20.902501  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:20.902581  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:20.994220  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:20.994239  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:20.994251  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:21.026853  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:21.026938  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:21.088311  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:21.088345  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:23.673432  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:23.683742  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:23.683816  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:23.735853  620659 cri.go:89] found id: ""
	I1216 05:28:23.735882  620659 logs.go:282] 0 containers: []
	W1216 05:28:23.735891  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:23.735898  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:23.735956  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:23.791783  620659 cri.go:89] found id: ""
	I1216 05:28:23.791810  620659 logs.go:282] 0 containers: []
	W1216 05:28:23.791820  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:23.791825  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:23.791882  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:23.852613  620659 cri.go:89] found id: ""
	I1216 05:28:23.852641  620659 logs.go:282] 0 containers: []
	W1216 05:28:23.852650  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:23.852656  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:23.852713  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:23.904272  620659 cri.go:89] found id: ""
	I1216 05:28:23.904310  620659 logs.go:282] 0 containers: []
	W1216 05:28:23.904319  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:23.904326  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:23.904393  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:23.945545  620659 cri.go:89] found id: ""
	I1216 05:28:23.945580  620659 logs.go:282] 0 containers: []
	W1216 05:28:23.945590  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:23.945598  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:23.945679  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:23.985938  620659 cri.go:89] found id: ""
	I1216 05:28:23.985973  620659 logs.go:282] 0 containers: []
	W1216 05:28:23.985983  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:23.985990  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:23.986056  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:24.035946  620659 cri.go:89] found id: ""
	I1216 05:28:24.036028  620659 logs.go:282] 0 containers: []
	W1216 05:28:24.036052  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:24.036073  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:24.036156  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:24.109509  620659 cri.go:89] found id: ""
	I1216 05:28:24.109531  620659 logs.go:282] 0 containers: []
	W1216 05:28:24.109540  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:24.109549  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:24.109560  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:24.219590  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:24.219670  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:24.247591  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:24.247619  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:24.334626  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:24.334694  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:24.334721  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:24.370261  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:24.373142  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:26.943916  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:26.953885  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:26.953956  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:26.981500  620659 cri.go:89] found id: ""
	I1216 05:28:26.981525  620659 logs.go:282] 0 containers: []
	W1216 05:28:26.981535  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:26.981541  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:26.981598  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:27.008383  620659 cri.go:89] found id: ""
	I1216 05:28:27.008416  620659 logs.go:282] 0 containers: []
	W1216 05:28:27.008425  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:27.008432  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:27.008506  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:27.034456  620659 cri.go:89] found id: ""
	I1216 05:28:27.034479  620659 logs.go:282] 0 containers: []
	W1216 05:28:27.034487  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:27.034493  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:27.034553  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:27.059328  620659 cri.go:89] found id: ""
	I1216 05:28:27.059353  620659 logs.go:282] 0 containers: []
	W1216 05:28:27.059362  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:27.059368  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:27.059426  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:27.086717  620659 cri.go:89] found id: ""
	I1216 05:28:27.086744  620659 logs.go:282] 0 containers: []
	W1216 05:28:27.086753  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:27.086759  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:27.086820  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:27.119703  620659 cri.go:89] found id: ""
	I1216 05:28:27.119729  620659 logs.go:282] 0 containers: []
	W1216 05:28:27.119738  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:27.119744  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:27.119851  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:27.148415  620659 cri.go:89] found id: ""
	I1216 05:28:27.148441  620659 logs.go:282] 0 containers: []
	W1216 05:28:27.148450  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:27.148456  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:27.148520  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:27.173938  620659 cri.go:89] found id: ""
	I1216 05:28:27.173966  620659 logs.go:282] 0 containers: []
	W1216 05:28:27.173975  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:27.173984  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:27.173996  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:27.242356  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:27.242394  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:27.258605  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:27.258633  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:27.324872  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:27.324898  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:27.324916  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:27.356025  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:27.356062  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
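
The kubelet journal appears to be collected without error every cycle, so the kubelet unit exists; the control-plane static pods simply never start. A reasonable next triage step, not shown in the captured log, would be to inspect the kubelet unit state and its recent errors directly:

    systemctl status kubelet --no-pager          # is the unit active, and since when?
    sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail' | tail -n 40
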
	I1216 05:28:29.886564  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:29.896285  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:29.896362  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:29.921631  620659 cri.go:89] found id: ""
	I1216 05:28:29.921702  620659 logs.go:282] 0 containers: []
	W1216 05:28:29.921725  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:29.921744  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:29.921833  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:29.947678  620659 cri.go:89] found id: ""
	I1216 05:28:29.947743  620659 logs.go:282] 0 containers: []
	W1216 05:28:29.947769  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:29.947786  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:29.947868  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:29.973836  620659 cri.go:89] found id: ""
	I1216 05:28:29.973860  620659 logs.go:282] 0 containers: []
	W1216 05:28:29.973869  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:29.973875  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:29.973949  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:29.998773  620659 cri.go:89] found id: ""
	I1216 05:28:29.998797  620659 logs.go:282] 0 containers: []
	W1216 05:28:29.998806  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:29.998812  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:29.998871  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:30.038848  620659 cri.go:89] found id: ""
	I1216 05:28:30.038888  620659 logs.go:282] 0 containers: []
	W1216 05:28:30.038898  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:30.038904  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:30.038977  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:30.087929  620659 cri.go:89] found id: ""
	I1216 05:28:30.087955  620659 logs.go:282] 0 containers: []
	W1216 05:28:30.087964  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:30.087971  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:30.088035  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:30.119151  620659 cri.go:89] found id: ""
	I1216 05:28:30.119179  620659 logs.go:282] 0 containers: []
	W1216 05:28:30.119188  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:30.119194  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:30.119255  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:30.151241  620659 cri.go:89] found id: ""
	I1216 05:28:30.151269  620659 logs.go:282] 0 containers: []
	W1216 05:28:30.151278  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:30.151287  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:30.151300  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:30.220962  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:30.220999  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:30.239283  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:30.239309  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:30.320801  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:30.320821  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:30.320833  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:30.351684  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:30.351719  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:32.882169  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:32.892485  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:32.892563  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:32.918439  620659 cri.go:89] found id: ""
	I1216 05:28:32.918465  620659 logs.go:282] 0 containers: []
	W1216 05:28:32.918474  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:32.918480  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:32.918537  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:32.943813  620659 cri.go:89] found id: ""
	I1216 05:28:32.943838  620659 logs.go:282] 0 containers: []
	W1216 05:28:32.943848  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:32.943856  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:32.943914  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:32.969710  620659 cri.go:89] found id: ""
	I1216 05:28:32.969736  620659 logs.go:282] 0 containers: []
	W1216 05:28:32.969745  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:32.969751  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:32.969812  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:32.993639  620659 cri.go:89] found id: ""
	I1216 05:28:32.993661  620659 logs.go:282] 0 containers: []
	W1216 05:28:32.993670  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:32.993679  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:32.993737  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:33.020006  620659 cri.go:89] found id: ""
	I1216 05:28:33.020029  620659 logs.go:282] 0 containers: []
	W1216 05:28:33.020039  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:33.020044  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:33.020106  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:33.049487  620659 cri.go:89] found id: ""
	I1216 05:28:33.049512  620659 logs.go:282] 0 containers: []
	W1216 05:28:33.049521  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:33.049527  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:33.049589  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:33.091164  620659 cri.go:89] found id: ""
	I1216 05:28:33.091190  620659 logs.go:282] 0 containers: []
	W1216 05:28:33.091200  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:33.091206  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:33.091265  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:33.125126  620659 cri.go:89] found id: ""
	I1216 05:28:33.125153  620659 logs.go:282] 0 containers: []
	W1216 05:28:33.125163  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:33.125173  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:33.125185  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:33.198042  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:33.198078  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:33.214289  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:33.214320  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:33.285409  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:33.285428  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:33.285441  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:33.316606  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:33.316640  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
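What the cycle above shows: minikube is waiting for the apiserver to come up, and on each pass it (1) checks whether a kube-apiserver process exists, (2) asks the CRI for containers of each control-plane component, and (3) finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal shell sketch of one such pass, assembled from the exact commands visible in this log (run on the minikube node; assumes passwordless sudo and a crictl-capable runtime):

    # does an apiserver process exist at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # list CRI containers (any state) for each control-plane component
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
        sudo crictl ps -a --quiet --name="$name"    # empty output => 0 containers
    done
    # log gathering, as in the cycle above
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # fails: localhost:8443 refused
    sudo journalctl -u crio -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a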
	I1216 05:28:35.847922  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:35.858366  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:35.858437  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:35.885532  620659 cri.go:89] found id: ""
	I1216 05:28:35.885556  620659 logs.go:282] 0 containers: []
	W1216 05:28:35.885565  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:35.885573  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:35.885637  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:35.910588  620659 cri.go:89] found id: ""
	I1216 05:28:35.910611  620659 logs.go:282] 0 containers: []
	W1216 05:28:35.910619  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:35.910625  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:35.910682  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:35.936489  620659 cri.go:89] found id: ""
	I1216 05:28:35.936513  620659 logs.go:282] 0 containers: []
	W1216 05:28:35.936521  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:35.936527  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:35.936587  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:35.966103  620659 cri.go:89] found id: ""
	I1216 05:28:35.966127  620659 logs.go:282] 0 containers: []
	W1216 05:28:35.966135  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:35.966141  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:35.966203  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:35.991922  620659 cri.go:89] found id: ""
	I1216 05:28:35.991949  620659 logs.go:282] 0 containers: []
	W1216 05:28:35.991958  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:35.991964  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:35.992023  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:36.027272  620659 cri.go:89] found id: ""
	I1216 05:28:36.027297  620659 logs.go:282] 0 containers: []
	W1216 05:28:36.027306  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:36.027314  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:36.027386  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:36.054186  620659 cri.go:89] found id: ""
	I1216 05:28:36.054211  620659 logs.go:282] 0 containers: []
	W1216 05:28:36.054221  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:36.054227  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:36.054292  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:36.105984  620659 cri.go:89] found id: ""
	I1216 05:28:36.106012  620659 logs.go:282] 0 containers: []
	W1216 05:28:36.106021  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:36.106030  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:36.106042  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:36.188534  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:36.188610  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:36.204878  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:36.204914  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:36.269863  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:36.269888  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:36.269902  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:36.301449  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:36.301484  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:38.830474  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:38.840755  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:38.840828  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:38.870050  620659 cri.go:89] found id: ""
	I1216 05:28:38.870115  620659 logs.go:282] 0 containers: []
	W1216 05:28:38.870133  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:38.870141  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:38.870206  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:38.899847  620659 cri.go:89] found id: ""
	I1216 05:28:38.899881  620659 logs.go:282] 0 containers: []
	W1216 05:28:38.899890  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:38.899896  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:38.899963  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:38.929523  620659 cri.go:89] found id: ""
	I1216 05:28:38.929548  620659 logs.go:282] 0 containers: []
	W1216 05:28:38.929558  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:38.929570  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:38.929636  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:38.959333  620659 cri.go:89] found id: ""
	I1216 05:28:38.959401  620659 logs.go:282] 0 containers: []
	W1216 05:28:38.959424  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:38.959438  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:38.959510  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:38.984344  620659 cri.go:89] found id: ""
	I1216 05:28:38.984385  620659 logs.go:282] 0 containers: []
	W1216 05:28:38.984394  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:38.984401  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:38.984470  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:39.012890  620659 cri.go:89] found id: ""
	I1216 05:28:39.012926  620659 logs.go:282] 0 containers: []
	W1216 05:28:39.012935  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:39.012943  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:39.013017  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:39.039304  620659 cri.go:89] found id: ""
	I1216 05:28:39.039371  620659 logs.go:282] 0 containers: []
	W1216 05:28:39.039396  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:39.039415  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:39.039498  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:39.064917  620659 cri.go:89] found id: ""
	I1216 05:28:39.064983  620659 logs.go:282] 0 containers: []
	W1216 05:28:39.065005  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:39.065026  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:39.065080  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:39.109532  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:39.109562  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:39.187553  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:39.187591  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:39.204838  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:39.204918  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:39.268242  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:39.268307  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:39.268326  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:41.801227  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:41.811603  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:41.811679  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:41.836680  620659 cri.go:89] found id: ""
	I1216 05:28:41.836708  620659 logs.go:282] 0 containers: []
	W1216 05:28:41.836727  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:41.836734  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:41.836793  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:41.864065  620659 cri.go:89] found id: ""
	I1216 05:28:41.864090  620659 logs.go:282] 0 containers: []
	W1216 05:28:41.864099  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:41.864105  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:41.864164  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:41.889035  620659 cri.go:89] found id: ""
	I1216 05:28:41.889060  620659 logs.go:282] 0 containers: []
	W1216 05:28:41.889088  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:41.889094  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:41.889157  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:41.915903  620659 cri.go:89] found id: ""
	I1216 05:28:41.915927  620659 logs.go:282] 0 containers: []
	W1216 05:28:41.915935  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:41.915942  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:41.916003  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:41.941030  620659 cri.go:89] found id: ""
	I1216 05:28:41.941056  620659 logs.go:282] 0 containers: []
	W1216 05:28:41.941086  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:41.941092  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:41.941150  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:41.971186  620659 cri.go:89] found id: ""
	I1216 05:28:41.971208  620659 logs.go:282] 0 containers: []
	W1216 05:28:41.971217  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:41.971224  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:41.971282  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:41.995908  620659 cri.go:89] found id: ""
	I1216 05:28:41.995930  620659 logs.go:282] 0 containers: []
	W1216 05:28:41.995938  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:41.995944  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:41.996000  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:42.032169  620659 cri.go:89] found id: ""
	I1216 05:28:42.032194  620659 logs.go:282] 0 containers: []
	W1216 05:28:42.032203  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:42.032213  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:42.032226  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:42.102853  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:42.102908  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:42.127560  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:42.127591  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:42.216457  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:42.216534  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:42.216566  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:42.260750  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:42.260798  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:44.795291  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:44.805508  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:44.805588  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:44.835139  620659 cri.go:89] found id: ""
	I1216 05:28:44.835164  620659 logs.go:282] 0 containers: []
	W1216 05:28:44.835173  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:44.835179  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:44.835238  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:44.869578  620659 cri.go:89] found id: ""
	I1216 05:28:44.869602  620659 logs.go:282] 0 containers: []
	W1216 05:28:44.869610  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:44.869616  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:44.869695  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:44.895303  620659 cri.go:89] found id: ""
	I1216 05:28:44.895341  620659 logs.go:282] 0 containers: []
	W1216 05:28:44.895356  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:44.895363  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:44.895424  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:44.920685  620659 cri.go:89] found id: ""
	I1216 05:28:44.920709  620659 logs.go:282] 0 containers: []
	W1216 05:28:44.920718  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:44.920724  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:44.920784  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:44.948570  620659 cri.go:89] found id: ""
	I1216 05:28:44.948597  620659 logs.go:282] 0 containers: []
	W1216 05:28:44.948606  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:44.948613  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:44.948679  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:44.974939  620659 cri.go:89] found id: ""
	I1216 05:28:44.974962  620659 logs.go:282] 0 containers: []
	W1216 05:28:44.974971  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:44.974978  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:44.975038  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:45.004514  620659 cri.go:89] found id: ""
	I1216 05:28:45.004538  620659 logs.go:282] 0 containers: []
	W1216 05:28:45.004547  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:45.004554  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:45.004629  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:45.088175  620659 cri.go:89] found id: ""
	I1216 05:28:45.088202  620659 logs.go:282] 0 containers: []
	W1216 05:28:45.088212  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:45.088223  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:45.088237  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:45.151670  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:45.151708  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:45.238753  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:45.238851  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:45.265795  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:45.265848  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:45.349923  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:45.349946  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:45.349960  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:47.889205  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:47.899505  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:47.899582  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:47.923866  620659 cri.go:89] found id: ""
	I1216 05:28:47.923893  620659 logs.go:282] 0 containers: []
	W1216 05:28:47.923902  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:47.923909  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:47.923976  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:47.951615  620659 cri.go:89] found id: ""
	I1216 05:28:47.951641  620659 logs.go:282] 0 containers: []
	W1216 05:28:47.951650  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:47.951656  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:47.951715  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:47.978474  620659 cri.go:89] found id: ""
	I1216 05:28:47.978499  620659 logs.go:282] 0 containers: []
	W1216 05:28:47.978507  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:47.978514  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:47.978573  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:48.007806  620659 cri.go:89] found id: ""
	I1216 05:28:48.007838  620659 logs.go:282] 0 containers: []
	W1216 05:28:48.007848  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:48.007856  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:48.007931  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:48.040822  620659 cri.go:89] found id: ""
	I1216 05:28:48.040851  620659 logs.go:282] 0 containers: []
	W1216 05:28:48.040860  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:48.040867  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:48.040930  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:48.078073  620659 cri.go:89] found id: ""
	I1216 05:28:48.078101  620659 logs.go:282] 0 containers: []
	W1216 05:28:48.078110  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:48.078117  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:48.078175  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:48.109203  620659 cri.go:89] found id: ""
	I1216 05:28:48.109229  620659 logs.go:282] 0 containers: []
	W1216 05:28:48.109244  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:48.109252  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:48.109311  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:48.152181  620659 cri.go:89] found id: ""
	I1216 05:28:48.152210  620659 logs.go:282] 0 containers: []
	W1216 05:28:48.152219  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:48.152229  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:48.152241  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:48.221654  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:48.221693  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:48.237455  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:48.237488  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:48.309321  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:48.309344  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:48.309356  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:48.340045  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:48.340081  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:50.877043  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:50.887355  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:50.887424  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:50.915841  620659 cri.go:89] found id: ""
	I1216 05:28:50.915868  620659 logs.go:282] 0 containers: []
	W1216 05:28:50.915878  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:50.915884  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:50.915941  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:50.945461  620659 cri.go:89] found id: ""
	I1216 05:28:50.945490  620659 logs.go:282] 0 containers: []
	W1216 05:28:50.945499  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:50.945505  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:50.945569  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:50.972657  620659 cri.go:89] found id: ""
	I1216 05:28:50.972683  620659 logs.go:282] 0 containers: []
	W1216 05:28:50.972692  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:50.972700  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:50.972762  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:51.004357  620659 cri.go:89] found id: ""
	I1216 05:28:51.004386  620659 logs.go:282] 0 containers: []
	W1216 05:28:51.004396  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:51.004403  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:51.004482  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:51.033106  620659 cri.go:89] found id: ""
	I1216 05:28:51.033131  620659 logs.go:282] 0 containers: []
	W1216 05:28:51.033141  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:51.033164  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:51.033239  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:51.058238  620659 cri.go:89] found id: ""
	I1216 05:28:51.058264  620659 logs.go:282] 0 containers: []
	W1216 05:28:51.058273  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:51.058279  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:51.058367  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:51.087294  620659 cri.go:89] found id: ""
	I1216 05:28:51.087370  620659 logs.go:282] 0 containers: []
	W1216 05:28:51.087393  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:51.087413  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:51.087500  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:51.123790  620659 cri.go:89] found id: ""
	I1216 05:28:51.123818  620659 logs.go:282] 0 containers: []
	W1216 05:28:51.123828  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:51.123837  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:51.123849  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:51.190585  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:51.190619  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:51.206630  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:51.206658  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:51.275166  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:51.275187  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:51.275199  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:51.310725  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:51.310770  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:53.839603  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:53.849690  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:53.849763  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:53.875612  620659 cri.go:89] found id: ""
	I1216 05:28:53.875680  620659 logs.go:282] 0 containers: []
	W1216 05:28:53.875702  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:53.875722  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:53.875814  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:53.901295  620659 cri.go:89] found id: ""
	I1216 05:28:53.901365  620659 logs.go:282] 0 containers: []
	W1216 05:28:53.901392  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:53.901410  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:53.901496  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:53.927198  620659 cri.go:89] found id: ""
	I1216 05:28:53.927266  620659 logs.go:282] 0 containers: []
	W1216 05:28:53.927282  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:53.927291  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:53.927355  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:53.954552  620659 cri.go:89] found id: ""
	I1216 05:28:53.954579  620659 logs.go:282] 0 containers: []
	W1216 05:28:53.954588  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:53.954594  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:53.954676  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:53.985260  620659 cri.go:89] found id: ""
	I1216 05:28:53.985284  620659 logs.go:282] 0 containers: []
	W1216 05:28:53.985293  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:53.985322  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:53.985403  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:54.020311  620659 cri.go:89] found id: ""
	I1216 05:28:54.020394  620659 logs.go:282] 0 containers: []
	W1216 05:28:54.020437  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:54.020717  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:54.020851  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:54.052114  620659 cri.go:89] found id: ""
	I1216 05:28:54.052142  620659 logs.go:282] 0 containers: []
	W1216 05:28:54.052151  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:54.052157  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:54.052221  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:54.088669  620659 cri.go:89] found id: ""
	I1216 05:28:54.088696  620659 logs.go:282] 0 containers: []
	W1216 05:28:54.088705  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:54.088714  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:54.088725  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:54.121484  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:54.121522  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:54.163379  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:54.163408  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:54.231248  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:54.231289  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:54.247706  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:54.247736  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:54.313055  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:56.813385  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:56.823580  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:56.823652  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:56.849476  620659 cri.go:89] found id: ""
	I1216 05:28:56.849502  620659 logs.go:282] 0 containers: []
	W1216 05:28:56.849510  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:56.849516  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:56.849576  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:56.878959  620659 cri.go:89] found id: ""
	I1216 05:28:56.878991  620659 logs.go:282] 0 containers: []
	W1216 05:28:56.879000  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:56.879006  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:56.879114  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:56.905806  620659 cri.go:89] found id: ""
	I1216 05:28:56.905833  620659 logs.go:282] 0 containers: []
	W1216 05:28:56.905842  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:56.905848  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:56.905910  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:56.932396  620659 cri.go:89] found id: ""
	I1216 05:28:56.932422  620659 logs.go:282] 0 containers: []
	W1216 05:28:56.932431  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:56.932438  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:56.932499  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:56.959623  620659 cri.go:89] found id: ""
	I1216 05:28:56.959649  620659 logs.go:282] 0 containers: []
	W1216 05:28:56.959657  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:56.959663  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:56.959722  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:56.985285  620659 cri.go:89] found id: ""
	I1216 05:28:56.985310  620659 logs.go:282] 0 containers: []
	W1216 05:28:56.985319  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:56.985326  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:56.985384  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:28:57.017916  620659 cri.go:89] found id: ""
	I1216 05:28:57.017942  620659 logs.go:282] 0 containers: []
	W1216 05:28:57.017951  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:28:57.017957  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:28:57.018037  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:28:57.044990  620659 cri.go:89] found id: ""
	I1216 05:28:57.045017  620659 logs.go:282] 0 containers: []
	W1216 05:28:57.045025  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:28:57.045034  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:28:57.045049  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:28:57.124638  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:28:57.124677  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:28:57.145780  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:28:57.145810  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:28:57.213797  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:28:57.213818  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:28:57.213831  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:28:57.244647  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:28:57.244683  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:28:59.773687  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:28:59.783878  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:28:59.783946  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:28:59.818963  620659 cri.go:89] found id: ""
	I1216 05:28:59.818986  620659 logs.go:282] 0 containers: []
	W1216 05:28:59.818995  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:28:59.819001  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:28:59.819065  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:28:59.855061  620659 cri.go:89] found id: ""
	I1216 05:28:59.855085  620659 logs.go:282] 0 containers: []
	W1216 05:28:59.855093  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:28:59.855099  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:28:59.855158  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:28:59.888207  620659 cri.go:89] found id: ""
	I1216 05:28:59.888228  620659 logs.go:282] 0 containers: []
	W1216 05:28:59.888236  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:28:59.888242  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:28:59.888299  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:28:59.921343  620659 cri.go:89] found id: ""
	I1216 05:28:59.921417  620659 logs.go:282] 0 containers: []
	W1216 05:28:59.921440  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:28:59.921457  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:28:59.921546  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:28:59.956876  620659 cri.go:89] found id: ""
	I1216 05:28:59.956952  620659 logs.go:282] 0 containers: []
	W1216 05:28:59.956975  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:28:59.956993  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:28:59.957103  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:28:59.984205  620659 cri.go:89] found id: ""
	I1216 05:28:59.984279  620659 logs.go:282] 0 containers: []
	W1216 05:28:59.984300  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:28:59.984322  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:28:59.984450  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:00.046957  620659 cri.go:89] found id: ""
	I1216 05:29:00.047051  620659 logs.go:282] 0 containers: []
	W1216 05:29:00.047077  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:00.047096  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:00.047224  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:00.142557  620659 cri.go:89] found id: ""
	I1216 05:29:00.142637  620659 logs.go:282] 0 containers: []
	W1216 05:29:00.142662  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:00.142685  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:00.142725  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:00.314633  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:00.314798  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:00.356158  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:00.356206  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:00.443450  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:00.443490  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:00.443510  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:00.479762  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:00.479801  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:03.013190  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:03.024682  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:03.024752  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:03.063169  620659 cri.go:89] found id: ""
	I1216 05:29:03.063190  620659 logs.go:282] 0 containers: []
	W1216 05:29:03.063199  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:03.063205  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:03.063265  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:03.103343  620659 cri.go:89] found id: ""
	I1216 05:29:03.103366  620659 logs.go:282] 0 containers: []
	W1216 05:29:03.103375  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:03.103381  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:03.103443  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:03.140321  620659 cri.go:89] found id: ""
	I1216 05:29:03.140344  620659 logs.go:282] 0 containers: []
	W1216 05:29:03.140353  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:03.140359  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:03.140416  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:03.166822  620659 cri.go:89] found id: ""
	I1216 05:29:03.166849  620659 logs.go:282] 0 containers: []
	W1216 05:29:03.166858  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:03.166864  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:03.166922  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:03.196283  620659 cri.go:89] found id: ""
	I1216 05:29:03.196310  620659 logs.go:282] 0 containers: []
	W1216 05:29:03.196320  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:03.196326  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:03.196387  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:03.223176  620659 cri.go:89] found id: ""
	I1216 05:29:03.223201  620659 logs.go:282] 0 containers: []
	W1216 05:29:03.223210  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:03.223216  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:03.223273  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:03.255606  620659 cri.go:89] found id: ""
	I1216 05:29:03.255631  620659 logs.go:282] 0 containers: []
	W1216 05:29:03.255640  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:03.255646  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:03.255703  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:03.281740  620659 cri.go:89] found id: ""
	I1216 05:29:03.281765  620659 logs.go:282] 0 containers: []
	W1216 05:29:03.281773  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:03.281788  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:03.281799  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:03.369916  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:03.369957  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:03.400804  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:03.400835  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:03.516775  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:03.516798  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:03.516811  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:03.549117  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:03.549149  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
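
Each cycle in this log enumerates the expected control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, storage-provisioner) by shelling out to crictl, and every query comes back empty. A minimal, self-contained Go sketch of that discovery step, run locally rather than over minikube's ssh_runner; listContainerIDs is a hypothetical helper name, not the real function in minikube's cri package:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // components mirrors the container names polled in the log above.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    }

    // listContainerIDs (hypothetical) runs crictl the same way the log does
    // and returns any matching container IDs, one per output line.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range components {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Printf("listing %q failed: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			// Corresponds to the log's: No container was found matching "<name>"
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: found %v\n", c, ids)
    	}
    }

On a healthy node each name resolves to at least one ID; the unbroken run of empty results above is what pushes every cycle into the log-gathering branch.
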
	I1216 05:29:06.083720  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:06.094097  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:06.094181  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:06.120226  620659 cri.go:89] found id: ""
	I1216 05:29:06.120249  620659 logs.go:282] 0 containers: []
	W1216 05:29:06.120258  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:06.120265  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:06.120323  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:06.152062  620659 cri.go:89] found id: ""
	I1216 05:29:06.152085  620659 logs.go:282] 0 containers: []
	W1216 05:29:06.152094  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:06.152100  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:06.152158  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:06.178355  620659 cri.go:89] found id: ""
	I1216 05:29:06.178378  620659 logs.go:282] 0 containers: []
	W1216 05:29:06.178387  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:06.178393  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:06.178453  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:06.207028  620659 cri.go:89] found id: ""
	I1216 05:29:06.207054  620659 logs.go:282] 0 containers: []
	W1216 05:29:06.207063  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:06.207069  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:06.207126  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:06.235842  620659 cri.go:89] found id: ""
	I1216 05:29:06.235869  620659 logs.go:282] 0 containers: []
	W1216 05:29:06.235884  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:06.235891  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:06.235946  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:06.261502  620659 cri.go:89] found id: ""
	I1216 05:29:06.261527  620659 logs.go:282] 0 containers: []
	W1216 05:29:06.261536  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:06.261542  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:06.261599  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:06.286972  620659 cri.go:89] found id: ""
	I1216 05:29:06.286997  620659 logs.go:282] 0 containers: []
	W1216 05:29:06.287006  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:06.287012  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:06.287090  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:06.311637  620659 cri.go:89] found id: ""
	I1216 05:29:06.311661  620659 logs.go:282] 0 containers: []
	W1216 05:29:06.311671  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:06.311680  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:06.311720  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:06.327704  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:06.327733  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:06.392499  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:06.392522  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:06.392537  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:06.422875  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:06.422908  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:06.450640  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:06.450669  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:09.018240  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:09.029221  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:09.029291  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:09.058639  620659 cri.go:89] found id: ""
	I1216 05:29:09.058670  620659 logs.go:282] 0 containers: []
	W1216 05:29:09.058679  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:09.058685  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:09.058747  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:09.084346  620659 cri.go:89] found id: ""
	I1216 05:29:09.084372  620659 logs.go:282] 0 containers: []
	W1216 05:29:09.084381  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:09.084387  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:09.084446  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:09.111931  620659 cri.go:89] found id: ""
	I1216 05:29:09.111957  620659 logs.go:282] 0 containers: []
	W1216 05:29:09.111966  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:09.111972  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:09.112040  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:09.139063  620659 cri.go:89] found id: ""
	I1216 05:29:09.139090  620659 logs.go:282] 0 containers: []
	W1216 05:29:09.139098  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:09.139105  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:09.139182  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:09.166602  620659 cri.go:89] found id: ""
	I1216 05:29:09.166636  620659 logs.go:282] 0 containers: []
	W1216 05:29:09.166645  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:09.166651  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:09.166748  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:09.194240  620659 cri.go:89] found id: ""
	I1216 05:29:09.194279  620659 logs.go:282] 0 containers: []
	W1216 05:29:09.194289  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:09.194297  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:09.194383  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:09.221259  620659 cri.go:89] found id: ""
	I1216 05:29:09.221328  620659 logs.go:282] 0 containers: []
	W1216 05:29:09.221353  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:09.221367  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:09.221440  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:09.251343  620659 cri.go:89] found id: ""
	I1216 05:29:09.251368  620659 logs.go:282] 0 containers: []
	W1216 05:29:09.251377  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:09.251386  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:09.251397  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:09.317874  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:09.317914  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:09.334306  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:09.334335  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:09.404691  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:09.404719  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:09.404737  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:09.436836  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:09.436869  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:11.967909  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:11.977523  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:11.977599  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:12.001724  620659 cri.go:89] found id: ""
	I1216 05:29:12.001750  620659 logs.go:282] 0 containers: []
	W1216 05:29:12.001759  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:12.001765  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:12.001821  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:12.034043  620659 cri.go:89] found id: ""
	I1216 05:29:12.034072  620659 logs.go:282] 0 containers: []
	W1216 05:29:12.034081  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:12.034087  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:12.034152  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:12.059788  620659 cri.go:89] found id: ""
	I1216 05:29:12.059817  620659 logs.go:282] 0 containers: []
	W1216 05:29:12.059825  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:12.059831  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:12.059893  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:12.089676  620659 cri.go:89] found id: ""
	I1216 05:29:12.089700  620659 logs.go:282] 0 containers: []
	W1216 05:29:12.089709  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:12.089722  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:12.089786  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:12.117581  620659 cri.go:89] found id: ""
	I1216 05:29:12.117607  620659 logs.go:282] 0 containers: []
	W1216 05:29:12.117616  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:12.117621  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:12.117686  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:12.146858  620659 cri.go:89] found id: ""
	I1216 05:29:12.146885  620659 logs.go:282] 0 containers: []
	W1216 05:29:12.146894  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:12.146900  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:12.146958  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:12.174697  620659 cri.go:89] found id: ""
	I1216 05:29:12.174723  620659 logs.go:282] 0 containers: []
	W1216 05:29:12.174732  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:12.174738  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:12.174794  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:12.203338  620659 cri.go:89] found id: ""
	I1216 05:29:12.203364  620659 logs.go:282] 0 containers: []
	W1216 05:29:12.203373  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:12.203382  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:12.203394  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:12.235035  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:12.235072  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:12.266318  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:12.266353  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:12.334298  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:12.334335  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:12.350744  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:12.350773  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:12.419878  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
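
Every "describe nodes" attempt above fails the same way: the TCP connection to localhost:8443 is refused, which typically means no process is listening on the apiserver port at all, consistent with the empty kube-apiserver container listings. A short Go sketch of that distinction, assuming the conventional localhost:8443 endpoint; probeAPIServer is a hypothetical name:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probeAPIServer (hypothetical) dials the apiserver port directly.
    // A "connection refused" error here reproduces the kubectl failure in
    // the log: nothing is accepting connections on localhost:8443.
    func probeAPIServer(addr string) error {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	if err := probeAPIServer("localhost:8443"); err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	fmt.Println("apiserver port is accepting connections")
    }

A refused dial is a different failure from a timeout or a TLS error: it rules out a slow apiserver and points at one that never started, which matches the missing kube-apiserver container.
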
	I1216 05:29:14.921213  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:14.931373  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:14.931443  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:14.958031  620659 cri.go:89] found id: ""
	I1216 05:29:14.958059  620659 logs.go:282] 0 containers: []
	W1216 05:29:14.958068  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:14.958077  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:14.958140  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:14.983995  620659 cri.go:89] found id: ""
	I1216 05:29:14.984027  620659 logs.go:282] 0 containers: []
	W1216 05:29:14.984036  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:14.984043  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:14.984104  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:15.016597  620659 cri.go:89] found id: ""
	I1216 05:29:15.016678  620659 logs.go:282] 0 containers: []
	W1216 05:29:15.016703  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:15.016724  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:15.016830  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:15.047233  620659 cri.go:89] found id: ""
	I1216 05:29:15.047259  620659 logs.go:282] 0 containers: []
	W1216 05:29:15.047268  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:15.047275  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:15.047336  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:15.079279  620659 cri.go:89] found id: ""
	I1216 05:29:15.079304  620659 logs.go:282] 0 containers: []
	W1216 05:29:15.079313  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:15.079320  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:15.079382  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:15.113634  620659 cri.go:89] found id: ""
	I1216 05:29:15.113676  620659 logs.go:282] 0 containers: []
	W1216 05:29:15.113686  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:15.113692  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:15.113762  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:15.138901  620659 cri.go:89] found id: ""
	I1216 05:29:15.138925  620659 logs.go:282] 0 containers: []
	W1216 05:29:15.138934  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:15.138940  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:15.139001  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:15.164029  620659 cri.go:89] found id: ""
	I1216 05:29:15.164052  620659 logs.go:282] 0 containers: []
	W1216 05:29:15.164061  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:15.164069  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:15.164082  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:15.232111  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:15.232147  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:15.248453  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:15.248482  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:15.315833  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:15.315853  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:15.315866  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:15.346837  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:15.346874  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:17.878368  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:17.888468  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:17.888545  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:17.914185  620659 cri.go:89] found id: ""
	I1216 05:29:17.914207  620659 logs.go:282] 0 containers: []
	W1216 05:29:17.914216  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:17.914222  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:17.914280  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:17.939753  620659 cri.go:89] found id: ""
	I1216 05:29:17.939777  620659 logs.go:282] 0 containers: []
	W1216 05:29:17.939786  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:17.939793  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:17.939850  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:17.964621  620659 cri.go:89] found id: ""
	I1216 05:29:17.964645  620659 logs.go:282] 0 containers: []
	W1216 05:29:17.964654  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:17.964659  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:17.964718  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:17.992534  620659 cri.go:89] found id: ""
	I1216 05:29:17.992558  620659 logs.go:282] 0 containers: []
	W1216 05:29:17.992567  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:17.992573  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:17.992640  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:18.041564  620659 cri.go:89] found id: ""
	I1216 05:29:18.041589  620659 logs.go:282] 0 containers: []
	W1216 05:29:18.041599  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:18.041606  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:18.041677  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:18.067873  620659 cri.go:89] found id: ""
	I1216 05:29:18.067907  620659 logs.go:282] 0 containers: []
	W1216 05:29:18.067930  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:18.067942  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:18.068029  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:18.094781  620659 cri.go:89] found id: ""
	I1216 05:29:18.094808  620659 logs.go:282] 0 containers: []
	W1216 05:29:18.094817  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:18.094824  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:18.094884  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:18.125528  620659 cri.go:89] found id: ""
	I1216 05:29:18.125552  620659 logs.go:282] 0 containers: []
	W1216 05:29:18.125562  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:18.125590  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:18.125602  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:18.142298  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:18.142328  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:18.210312  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:18.210333  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:18.210345  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:18.240907  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:18.240941  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:18.273150  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:18.273183  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
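
The timestamps show the same probe repeating roughly every three seconds: pgrep for a kube-apiserver process, then a fresh round of log gathering when none is found. A hedged Go sketch of that retry pattern; waitForAPIServer is a hypothetical name, and the real cadence and overall timeout live inside minikube:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer (hypothetical) models the ~3s cadence visible in the
    // timestamps: check for the apiserver process, retry until it appears
    // or the context expires.
    func waitForAPIServer(ctx context.Context) error {
    	ticker := time.NewTicker(3 * time.Second)
    	defer ticker.Stop()
    	for {
    		// pgrep -xnf exits non-zero when no process matches, so a nil
    		// error means a kube-apiserver process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()
    	if err := waitForAPIServer(ctx); err != nil {
    		fmt.Println("gave up waiting for kube-apiserver:", err)
    		return
    	}
    	fmt.Println("kube-apiserver process found")
    }

In this run the loop never succeeds, so the cycles continue until the test's own timeout expires.
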
	I1216 05:29:20.840716  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:20.852055  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:20.852120  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:20.883284  620659 cri.go:89] found id: ""
	I1216 05:29:20.883307  620659 logs.go:282] 0 containers: []
	W1216 05:29:20.883316  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:20.883323  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:20.883411  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:20.908835  620659 cri.go:89] found id: ""
	I1216 05:29:20.908863  620659 logs.go:282] 0 containers: []
	W1216 05:29:20.908879  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:20.908886  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:20.908945  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:20.934251  620659 cri.go:89] found id: ""
	I1216 05:29:20.934273  620659 logs.go:282] 0 containers: []
	W1216 05:29:20.934282  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:20.934288  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:20.934345  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:20.968294  620659 cri.go:89] found id: ""
	I1216 05:29:20.968318  620659 logs.go:282] 0 containers: []
	W1216 05:29:20.968326  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:20.968332  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:20.968388  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:20.993450  620659 cri.go:89] found id: ""
	I1216 05:29:20.993473  620659 logs.go:282] 0 containers: []
	W1216 05:29:20.993481  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:20.993487  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:20.993549  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:21.023084  620659 cri.go:89] found id: ""
	I1216 05:29:21.023108  620659 logs.go:282] 0 containers: []
	W1216 05:29:21.023117  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:21.023123  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:21.023185  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:21.048202  620659 cri.go:89] found id: ""
	I1216 05:29:21.048224  620659 logs.go:282] 0 containers: []
	W1216 05:29:21.048232  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:21.048238  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:21.048295  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:21.077267  620659 cri.go:89] found id: ""
	I1216 05:29:21.077290  620659 logs.go:282] 0 containers: []
	W1216 05:29:21.077298  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:21.077307  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:21.077325  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:21.106457  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:21.106486  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:21.176694  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:21.176733  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:21.194244  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:21.194277  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:21.257448  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:21.257468  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:21.257480  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:23.789709  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:23.801595  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:23.801700  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:23.832989  620659 cri.go:89] found id: ""
	I1216 05:29:23.833016  620659 logs.go:282] 0 containers: []
	W1216 05:29:23.833024  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:23.833031  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:23.833122  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:23.869482  620659 cri.go:89] found id: ""
	I1216 05:29:23.869512  620659 logs.go:282] 0 containers: []
	W1216 05:29:23.869521  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:23.869526  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:23.869585  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:23.897659  620659 cri.go:89] found id: ""
	I1216 05:29:23.897690  620659 logs.go:282] 0 containers: []
	W1216 05:29:23.897698  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:23.897710  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:23.897771  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:23.923789  620659 cri.go:89] found id: ""
	I1216 05:29:23.923814  620659 logs.go:282] 0 containers: []
	W1216 05:29:23.923823  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:23.923831  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:23.923892  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:23.949740  620659 cri.go:89] found id: ""
	I1216 05:29:23.949770  620659 logs.go:282] 0 containers: []
	W1216 05:29:23.949780  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:23.949786  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:23.949847  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:23.974444  620659 cri.go:89] found id: ""
	I1216 05:29:23.974468  620659 logs.go:282] 0 containers: []
	W1216 05:29:23.974477  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:23.974484  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:23.974545  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:24.001190  620659 cri.go:89] found id: ""
	I1216 05:29:24.001271  620659 logs.go:282] 0 containers: []
	W1216 05:29:24.001287  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:24.001295  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:24.001376  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:24.030714  620659 cri.go:89] found id: ""
	I1216 05:29:24.030738  620659 logs.go:282] 0 containers: []
	W1216 05:29:24.030747  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:24.030756  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:24.030768  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:24.095276  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:24.095298  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:24.095311  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:24.126033  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:24.126070  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:24.157813  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:24.157840  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:24.226421  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:24.226459  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
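
The "Gathering logs for ..." steps run a fixed set of diagnostic commands on the node: the kubelet and CRI-O journals, a filtered dmesg, and a container-status listing with a docker fallback. A simplified local approximation of that set, reusing the exact shell commands seen above; the real code routes them through minikube's ssh_runner, and the ordering of the steps varies between cycles:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // diagnostics lists the shell commands from the "Gathering logs" steps;
    // here they execute locally instead of over SSH.
    var diagnostics = []struct{ name, cmd string }{
    	{"kubelet", "sudo journalctl -u kubelet -n 400"},
    	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    	{"CRI-O", "sudo journalctl -u crio -n 400"},
    	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }

    func main() {
    	for _, d := range diagnostics {
    		out, err := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("%s: %v\n", d.name, err)
    		}
    		fmt.Printf("=== %s ===\n%s\n", d.name, out)
    	}
    }

Only "describe nodes" ever errors in these cycles; the journal, dmesg, and container-status commands succeed but return nothing that changes the picture, since no control-plane containers exist to report on.
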
	I1216 05:29:26.743464  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:26.753295  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:26.753420  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:26.788559  620659 cri.go:89] found id: ""
	I1216 05:29:26.788586  620659 logs.go:282] 0 containers: []
	W1216 05:29:26.788595  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:26.788602  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:26.788662  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:26.819346  620659 cri.go:89] found id: ""
	I1216 05:29:26.819370  620659 logs.go:282] 0 containers: []
	W1216 05:29:26.819380  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:26.819386  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:26.819447  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:26.849416  620659 cri.go:89] found id: ""
	I1216 05:29:26.849451  620659 logs.go:282] 0 containers: []
	W1216 05:29:26.849459  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:26.849465  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:26.849535  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:26.878142  620659 cri.go:89] found id: ""
	I1216 05:29:26.878178  620659 logs.go:282] 0 containers: []
	W1216 05:29:26.878187  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:26.878194  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:26.878267  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:26.908414  620659 cri.go:89] found id: ""
	I1216 05:29:26.908450  620659 logs.go:282] 0 containers: []
	W1216 05:29:26.908459  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:26.908465  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:26.908543  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:26.938061  620659 cri.go:89] found id: ""
	I1216 05:29:26.938091  620659 logs.go:282] 0 containers: []
	W1216 05:29:26.938099  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:26.938106  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:26.938167  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:26.964249  620659 cri.go:89] found id: ""
	I1216 05:29:26.964281  620659 logs.go:282] 0 containers: []
	W1216 05:29:26.964290  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:26.964296  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:26.964364  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:26.992277  620659 cri.go:89] found id: ""
	I1216 05:29:26.992350  620659 logs.go:282] 0 containers: []
	W1216 05:29:26.992373  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:26.992396  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:26.992436  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:27.058931  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:27.058971  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:27.074967  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:27.074994  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:27.139192  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:27.139259  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:27.139285  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:27.169363  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:27.169397  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:29.703846  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:29.714115  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:29.714187  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:29.743885  620659 cri.go:89] found id: ""
	I1216 05:29:29.743965  620659 logs.go:282] 0 containers: []
	W1216 05:29:29.743996  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:29.744014  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:29.744104  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:29.769988  620659 cri.go:89] found id: ""
	I1216 05:29:29.770068  620659 logs.go:282] 0 containers: []
	W1216 05:29:29.770092  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:29.770110  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:29.770200  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:29.795298  620659 cri.go:89] found id: ""
	I1216 05:29:29.795368  620659 logs.go:282] 0 containers: []
	W1216 05:29:29.795391  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:29.795405  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:29.795480  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:29.833023  620659 cri.go:89] found id: ""
	I1216 05:29:29.833051  620659 logs.go:282] 0 containers: []
	W1216 05:29:29.833060  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:29.833142  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:29.833208  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:29.863381  620659 cri.go:89] found id: ""
	I1216 05:29:29.863416  620659 logs.go:282] 0 containers: []
	W1216 05:29:29.863425  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:29.863431  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:29.863502  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:29.893496  620659 cri.go:89] found id: ""
	I1216 05:29:29.893565  620659 logs.go:282] 0 containers: []
	W1216 05:29:29.893591  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:29.893605  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:29.893744  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:29.921943  620659 cri.go:89] found id: ""
	I1216 05:29:29.921970  620659 logs.go:282] 0 containers: []
	W1216 05:29:29.921979  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:29.921985  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:29.922044  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:29.946333  620659 cri.go:89] found id: ""
	I1216 05:29:29.946360  620659 logs.go:282] 0 containers: []
	W1216 05:29:29.946375  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:29.946385  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:29.946398  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:30.017424  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:30.017478  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:30.039142  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:30.039191  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:30.109557  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:30.109586  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:30.109604  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:30.140108  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:30.140145  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:32.674381  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:32.685057  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:32.685142  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:32.711046  620659 cri.go:89] found id: ""
	I1216 05:29:32.711070  620659 logs.go:282] 0 containers: []
	W1216 05:29:32.711079  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:32.711086  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:32.711147  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:32.736011  620659 cri.go:89] found id: ""
	I1216 05:29:32.736036  620659 logs.go:282] 0 containers: []
	W1216 05:29:32.736045  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:32.736051  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:32.736113  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:32.763906  620659 cri.go:89] found id: ""
	I1216 05:29:32.763934  620659 logs.go:282] 0 containers: []
	W1216 05:29:32.763943  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:32.763949  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:32.764007  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:32.788822  620659 cri.go:89] found id: ""
	I1216 05:29:32.788850  620659 logs.go:282] 0 containers: []
	W1216 05:29:32.788859  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:32.788865  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:32.788931  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:32.818458  620659 cri.go:89] found id: ""
	I1216 05:29:32.818486  620659 logs.go:282] 0 containers: []
	W1216 05:29:32.818495  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:32.818501  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:32.818560  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:32.847900  620659 cri.go:89] found id: ""
	I1216 05:29:32.847928  620659 logs.go:282] 0 containers: []
	W1216 05:29:32.847937  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:32.847943  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:32.848004  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:32.878349  620659 cri.go:89] found id: ""
	I1216 05:29:32.878384  620659 logs.go:282] 0 containers: []
	W1216 05:29:32.878394  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:32.878400  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:32.878474  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:32.912021  620659 cri.go:89] found id: ""
	I1216 05:29:32.912060  620659 logs.go:282] 0 containers: []
	W1216 05:29:32.912071  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:32.912081  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:32.912093  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:32.978715  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:32.978757  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:32.995163  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:32.995204  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:33.063427  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:33.063451  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:33.063479  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:33.094092  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:33.094126  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:35.625928  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:35.636169  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:35.636236  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:35.662805  620659 cri.go:89] found id: ""
	I1216 05:29:35.662826  620659 logs.go:282] 0 containers: []
	W1216 05:29:35.662834  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:35.662840  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:35.662898  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:35.689226  620659 cri.go:89] found id: ""
	I1216 05:29:35.689253  620659 logs.go:282] 0 containers: []
	W1216 05:29:35.689267  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:35.689273  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:35.689330  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:35.715143  620659 cri.go:89] found id: ""
	I1216 05:29:35.715171  620659 logs.go:282] 0 containers: []
	W1216 05:29:35.715180  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:35.715186  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:35.715247  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:35.738793  620659 cri.go:89] found id: ""
	I1216 05:29:35.738818  620659 logs.go:282] 0 containers: []
	W1216 05:29:35.738827  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:35.738834  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:35.738893  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:35.768110  620659 cri.go:89] found id: ""
	I1216 05:29:35.768144  620659 logs.go:282] 0 containers: []
	W1216 05:29:35.768153  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:35.768159  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:35.768222  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:35.795219  620659 cri.go:89] found id: ""
	I1216 05:29:35.795243  620659 logs.go:282] 0 containers: []
	W1216 05:29:35.795252  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:35.795258  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:35.795322  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:35.829848  620659 cri.go:89] found id: ""
	I1216 05:29:35.829872  620659 logs.go:282] 0 containers: []
	W1216 05:29:35.829881  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:35.829887  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:35.829949  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:35.866632  620659 cri.go:89] found id: ""
	I1216 05:29:35.866655  620659 logs.go:282] 0 containers: []
	W1216 05:29:35.866664  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:35.866672  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:35.866685  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:35.898696  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:35.898722  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:35.964962  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:35.965001  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:35.982149  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:35.982183  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:36.054792  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:36.054815  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:36.054828  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
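The block above is one iteration of minikube's apiserver wait loop: poll for a kube-apiserver process, list CRI containers for each control-plane component, and, when every list comes back empty, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying about three seconds later. A minimal sketch of the same checks run by hand, using only the commands, flags, and paths recorded in the log entries above:

    # Hedged sketch: the diagnostic commands from the log, replayable inside the
    # node over SSH. Every flag and path is copied from the entries above.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # any apiserver process at all?
    sudo crictl ps -a --quiet --name=kube-apiserver     # any apiserver container, even exited?
    sudo journalctl -u kubelet -n 400                   # kubelet log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig       # fails here: localhost:8443 refused
    sudo journalctl -u crio -n 400                      # CRI-O log tail

Every container query returns an empty ID list and describe nodes is refused on localhost:8443, which is consistent with a control plane that never came up rather than one that crashed mid-flight.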
	I1216 05:29:38.585859  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:38.596215  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:38.596304  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:38.623875  620659 cri.go:89] found id: ""
	I1216 05:29:38.623901  620659 logs.go:282] 0 containers: []
	W1216 05:29:38.623910  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:38.623916  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:38.623975  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:38.648507  620659 cri.go:89] found id: ""
	I1216 05:29:38.648536  620659 logs.go:282] 0 containers: []
	W1216 05:29:38.648544  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:38.648550  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:38.648607  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:38.675804  620659 cri.go:89] found id: ""
	I1216 05:29:38.675833  620659 logs.go:282] 0 containers: []
	W1216 05:29:38.675842  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:38.675848  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:38.675910  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:38.703214  620659 cri.go:89] found id: ""
	I1216 05:29:38.703241  620659 logs.go:282] 0 containers: []
	W1216 05:29:38.703250  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:38.703256  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:38.703315  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:38.727817  620659 cri.go:89] found id: ""
	I1216 05:29:38.727841  620659 logs.go:282] 0 containers: []
	W1216 05:29:38.727850  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:38.727855  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:38.727914  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:38.753546  620659 cri.go:89] found id: ""
	I1216 05:29:38.753569  620659 logs.go:282] 0 containers: []
	W1216 05:29:38.753578  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:38.753584  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:38.753642  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:38.784467  620659 cri.go:89] found id: ""
	I1216 05:29:38.784492  620659 logs.go:282] 0 containers: []
	W1216 05:29:38.784502  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:38.784508  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:38.784567  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:38.811069  620659 cri.go:89] found id: ""
	I1216 05:29:38.811092  620659 logs.go:282] 0 containers: []
	W1216 05:29:38.811101  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:38.811111  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:38.811122  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:38.885675  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:38.885716  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:38.902961  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:38.902991  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:38.974502  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:38.974568  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:38.974597  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:39.005666  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:39.005723  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:41.540583  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:41.551259  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:41.551330  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:41.579315  620659 cri.go:89] found id: ""
	I1216 05:29:41.579339  620659 logs.go:282] 0 containers: []
	W1216 05:29:41.579347  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:41.579353  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:41.579419  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:41.611256  620659 cri.go:89] found id: ""
	I1216 05:29:41.611278  620659 logs.go:282] 0 containers: []
	W1216 05:29:41.611287  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:41.611293  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:41.611352  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:41.659393  620659 cri.go:89] found id: ""
	I1216 05:29:41.659416  620659 logs.go:282] 0 containers: []
	W1216 05:29:41.659425  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:41.659431  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:41.659489  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:41.695537  620659 cri.go:89] found id: ""
	I1216 05:29:41.695618  620659 logs.go:282] 0 containers: []
	W1216 05:29:41.695642  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:41.695660  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:41.695766  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:41.732955  620659 cri.go:89] found id: ""
	I1216 05:29:41.732978  620659 logs.go:282] 0 containers: []
	W1216 05:29:41.732987  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:41.732993  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:41.733052  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:41.761377  620659 cri.go:89] found id: ""
	I1216 05:29:41.761398  620659 logs.go:282] 0 containers: []
	W1216 05:29:41.761406  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:41.761412  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:41.761472  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:41.791677  620659 cri.go:89] found id: ""
	I1216 05:29:41.791699  620659 logs.go:282] 0 containers: []
	W1216 05:29:41.791707  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:41.791713  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:41.791785  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:41.831576  620659 cri.go:89] found id: ""
	I1216 05:29:41.831605  620659 logs.go:282] 0 containers: []
	W1216 05:29:41.831613  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:41.831622  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:41.831634  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:41.859882  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:41.859963  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:41.972935  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:41.973005  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:41.973036  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:42.022824  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:42.022914  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:42.067992  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:42.068075  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:44.652976  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:44.663032  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:44.663110  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:44.689091  620659 cri.go:89] found id: ""
	I1216 05:29:44.689114  620659 logs.go:282] 0 containers: []
	W1216 05:29:44.689123  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:44.689129  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:44.689190  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:44.714400  620659 cri.go:89] found id: ""
	I1216 05:29:44.714427  620659 logs.go:282] 0 containers: []
	W1216 05:29:44.714436  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:44.714442  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:44.714500  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:44.739447  620659 cri.go:89] found id: ""
	I1216 05:29:44.739474  620659 logs.go:282] 0 containers: []
	W1216 05:29:44.739483  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:44.739489  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:44.739547  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:44.765631  620659 cri.go:89] found id: ""
	I1216 05:29:44.765664  620659 logs.go:282] 0 containers: []
	W1216 05:29:44.765673  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:44.765679  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:44.765740  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:44.791612  620659 cri.go:89] found id: ""
	I1216 05:29:44.791689  620659 logs.go:282] 0 containers: []
	W1216 05:29:44.791713  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:44.791732  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:44.791821  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:44.816055  620659 cri.go:89] found id: ""
	I1216 05:29:44.816129  620659 logs.go:282] 0 containers: []
	W1216 05:29:44.816140  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:44.816147  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:44.816244  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:44.844243  620659 cri.go:89] found id: ""
	I1216 05:29:44.844316  620659 logs.go:282] 0 containers: []
	W1216 05:29:44.844339  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:44.844357  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:44.844449  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:44.870117  620659 cri.go:89] found id: ""
	I1216 05:29:44.870187  620659 logs.go:282] 0 containers: []
	W1216 05:29:44.870213  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:44.870231  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:44.870259  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:44.899878  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:44.899908  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:44.968036  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:44.968073  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:44.984089  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:44.984169  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:45.077648  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:45.077745  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:45.077777  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:47.640424  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:47.650661  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:47.650732  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:47.677289  620659 cri.go:89] found id: ""
	I1216 05:29:47.677318  620659 logs.go:282] 0 containers: []
	W1216 05:29:47.677326  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:47.677333  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:47.677391  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:47.704001  620659 cri.go:89] found id: ""
	I1216 05:29:47.704023  620659 logs.go:282] 0 containers: []
	W1216 05:29:47.704033  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:47.704039  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:47.704096  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:47.730840  620659 cri.go:89] found id: ""
	I1216 05:29:47.730863  620659 logs.go:282] 0 containers: []
	W1216 05:29:47.730871  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:47.730877  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:47.730934  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:47.755435  620659 cri.go:89] found id: ""
	I1216 05:29:47.755458  620659 logs.go:282] 0 containers: []
	W1216 05:29:47.755466  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:47.755473  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:47.755530  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:47.784295  620659 cri.go:89] found id: ""
	I1216 05:29:47.784318  620659 logs.go:282] 0 containers: []
	W1216 05:29:47.784326  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:47.784332  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:47.784392  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:47.811090  620659 cri.go:89] found id: ""
	I1216 05:29:47.811117  620659 logs.go:282] 0 containers: []
	W1216 05:29:47.811126  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:47.811132  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:47.811191  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:47.837058  620659 cri.go:89] found id: ""
	I1216 05:29:47.837112  620659 logs.go:282] 0 containers: []
	W1216 05:29:47.837127  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:47.837133  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:47.837190  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:47.863252  620659 cri.go:89] found id: ""
	I1216 05:29:47.863276  620659 logs.go:282] 0 containers: []
	W1216 05:29:47.863285  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:47.863293  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:47.863312  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:47.930429  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:47.930467  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:47.946864  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:47.946895  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:48.017053  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:48.017098  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:48.017112  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:48.049976  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:48.050011  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:50.589560  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:50.599924  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:50.600001  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:50.626460  620659 cri.go:89] found id: ""
	I1216 05:29:50.626488  620659 logs.go:282] 0 containers: []
	W1216 05:29:50.626498  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:50.626504  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:50.626561  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:50.653393  620659 cri.go:89] found id: ""
	I1216 05:29:50.653416  620659 logs.go:282] 0 containers: []
	W1216 05:29:50.653425  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:50.653430  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:50.653489  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:50.679861  620659 cri.go:89] found id: ""
	I1216 05:29:50.679888  620659 logs.go:282] 0 containers: []
	W1216 05:29:50.679897  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:50.679903  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:50.679962  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:50.705489  620659 cri.go:89] found id: ""
	I1216 05:29:50.705512  620659 logs.go:282] 0 containers: []
	W1216 05:29:50.705520  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:50.705527  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:50.705585  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:50.735283  620659 cri.go:89] found id: ""
	I1216 05:29:50.735306  620659 logs.go:282] 0 containers: []
	W1216 05:29:50.735315  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:50.735321  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:50.735379  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:50.768407  620659 cri.go:89] found id: ""
	I1216 05:29:50.768432  620659 logs.go:282] 0 containers: []
	W1216 05:29:50.768441  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:50.768453  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:50.768510  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:50.795001  620659 cri.go:89] found id: ""
	I1216 05:29:50.795024  620659 logs.go:282] 0 containers: []
	W1216 05:29:50.795033  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:50.795039  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:50.795102  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:50.820710  620659 cri.go:89] found id: ""
	I1216 05:29:50.820735  620659 logs.go:282] 0 containers: []
	W1216 05:29:50.820743  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:50.820753  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:50.820764  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:50.887318  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:50.887354  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:50.903421  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:50.903449  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:50.970908  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:50.970974  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:50.971002  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:51.001988  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:51.002028  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:53.536954  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:53.546646  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:53.546717  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:53.572035  620659 cri.go:89] found id: ""
	I1216 05:29:53.572062  620659 logs.go:282] 0 containers: []
	W1216 05:29:53.572071  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:53.572080  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:53.572139  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:53.596889  620659 cri.go:89] found id: ""
	I1216 05:29:53.596916  620659 logs.go:282] 0 containers: []
	W1216 05:29:53.596925  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:53.596931  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:53.596990  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:53.621338  620659 cri.go:89] found id: ""
	I1216 05:29:53.621405  620659 logs.go:282] 0 containers: []
	W1216 05:29:53.621430  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:53.621451  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:53.621535  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:53.646384  620659 cri.go:89] found id: ""
	I1216 05:29:53.646408  620659 logs.go:282] 0 containers: []
	W1216 05:29:53.646417  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:53.646423  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:53.646479  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:53.677149  620659 cri.go:89] found id: ""
	I1216 05:29:53.677181  620659 logs.go:282] 0 containers: []
	W1216 05:29:53.677190  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:53.677203  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:53.677264  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:53.703998  620659 cri.go:89] found id: ""
	I1216 05:29:53.704022  620659 logs.go:282] 0 containers: []
	W1216 05:29:53.704031  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:53.704037  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:53.704141  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:53.729499  620659 cri.go:89] found id: ""
	I1216 05:29:53.729525  620659 logs.go:282] 0 containers: []
	W1216 05:29:53.729535  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:53.729541  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:53.729613  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:53.754446  620659 cri.go:89] found id: ""
	I1216 05:29:53.754473  620659 logs.go:282] 0 containers: []
	W1216 05:29:53.754482  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:53.754491  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:53.754502  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:53.821228  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:53.821265  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:53.837595  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:53.837626  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:53.902545  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:53.902568  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:53.902581  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:53.932707  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:53.932744  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:56.463388  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:56.474368  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:56.474427  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:56.520129  620659 cri.go:89] found id: ""
	I1216 05:29:56.520155  620659 logs.go:282] 0 containers: []
	W1216 05:29:56.520164  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:56.520170  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:56.520249  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:56.562673  620659 cri.go:89] found id: ""
	I1216 05:29:56.562696  620659 logs.go:282] 0 containers: []
	W1216 05:29:56.562710  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:56.562716  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:56.562794  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:56.600236  620659 cri.go:89] found id: ""
	I1216 05:29:56.600266  620659 logs.go:282] 0 containers: []
	W1216 05:29:56.600275  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:56.600281  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:56.600340  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:56.640456  620659 cri.go:89] found id: ""
	I1216 05:29:56.640484  620659 logs.go:282] 0 containers: []
	W1216 05:29:56.640497  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:56.640504  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:56.640592  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:56.685661  620659 cri.go:89] found id: ""
	I1216 05:29:56.685685  620659 logs.go:282] 0 containers: []
	W1216 05:29:56.685697  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:56.685709  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:56.685786  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:56.726930  620659 cri.go:89] found id: ""
	I1216 05:29:56.726952  620659 logs.go:282] 0 containers: []
	W1216 05:29:56.726961  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:56.726976  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:56.727046  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:56.771465  620659 cri.go:89] found id: ""
	I1216 05:29:56.771487  620659 logs.go:282] 0 containers: []
	W1216 05:29:56.771496  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:56.771502  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:56.771585  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:56.808926  620659 cri.go:89] found id: ""
	I1216 05:29:56.808959  620659 logs.go:282] 0 containers: []
	W1216 05:29:56.808968  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:56.808977  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:56.808989  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:56.856112  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:56.856238  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:56.951908  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:56.952009  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:56.972739  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:56.972821  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:57.088597  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:57.088687  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:57.088716  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:59.652975  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:29:59.665376  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:59.665442  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:59.700534  620659 cri.go:89] found id: ""
	I1216 05:29:59.700556  620659 logs.go:282] 0 containers: []
	W1216 05:29:59.700564  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:29:59.700570  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:59.700630  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:59.744818  620659 cri.go:89] found id: ""
	I1216 05:29:59.744841  620659 logs.go:282] 0 containers: []
	W1216 05:29:59.744850  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:29:59.744856  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:59.744917  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:59.775400  620659 cri.go:89] found id: ""
	I1216 05:29:59.775424  620659 logs.go:282] 0 containers: []
	W1216 05:29:59.775432  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:29:59.775439  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:59.775503  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:59.828885  620659 cri.go:89] found id: ""
	I1216 05:29:59.828919  620659 logs.go:282] 0 containers: []
	W1216 05:29:59.828927  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:29:59.828934  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:59.829048  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:59.856908  620659 cri.go:89] found id: ""
	I1216 05:29:59.856943  620659 logs.go:282] 0 containers: []
	W1216 05:29:59.856952  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:59.856958  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:59.857016  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:59.883454  620659 cri.go:89] found id: ""
	I1216 05:29:59.883476  620659 logs.go:282] 0 containers: []
	W1216 05:29:59.883484  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:29:59.883490  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:59.883559  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:59.909392  620659 cri.go:89] found id: ""
	I1216 05:29:59.909414  620659 logs.go:282] 0 containers: []
	W1216 05:29:59.909422  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:59.909429  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:59.909493  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:59.935845  620659 cri.go:89] found id: ""
	I1216 05:29:59.935869  620659 logs.go:282] 0 containers: []
	W1216 05:29:59.935878  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:59.935887  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:59.935898  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:59.967073  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:29:59.967107  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:59.998217  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:59.998244  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:30:00.281114  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:30:00.281169  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:30:00.328473  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:30:00.328572  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:30:00.504626  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:30:03.005412  620659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:30:03.034162  620659 kubeadm.go:602] duration metric: took 4m3.889981376s to restartPrimaryControlPlane
	W1216 05:30:03.034237  620659 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 05:30:03.034302  620659 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 05:30:03.446546  620659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:30:03.459349  620659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:30:03.468927  620659 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:30:03.468993  620659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:30:03.477095  620659 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:30:03.477115  620659 kubeadm.go:158] found existing configuration files:
	
	I1216 05:30:03.477168  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:30:03.484903  620659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:30:03.484972  620659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:30:03.492920  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:30:03.500620  620659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:30:03.500691  620659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:30:03.508650  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:30:03.516385  620659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:30:03.516452  620659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:30:03.523771  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:30:03.531466  620659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:30:03.531532  620659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
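All four grep checks exit with status 2 because the files are simply absent, so the subsequent rm -f calls are no-ops. The per-file pattern minikube applies here reduces to the following sketch (the file names and the grep target are copied from the log; the loop itself is illustrative, not minikube's actual code):

    # Hedged sketch of the stale-kubeconfig cleanup above: keep a file only if it
    # already points at the expected control-plane endpoint, otherwise remove it.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # missing (status 2) or stale (status 1): remove
      fi
    done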
	I1216 05:30:03.538741  620659 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:30:03.575695  620659 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:30:03.576022  620659 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:30:03.649886  620659 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:30:03.649963  620659 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 05:30:03.650019  620659 kubeadm.go:319] OS: Linux
	I1216 05:30:03.650147  620659 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:30:03.650239  620659 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 05:30:03.650324  620659 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:30:03.650410  620659 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:30:03.650497  620659 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:30:03.650589  620659 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:30:03.650662  620659 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:30:03.650739  620659 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:30:03.650817  620659 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 05:30:03.721654  620659 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:30:03.721858  620659 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:30:03.721987  620659 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:30:03.734533  620659 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:30:03.738786  620659 out.go:252]   - Generating certificates and keys ...
	I1216 05:30:03.738885  620659 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:30:03.738955  620659 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:30:03.739038  620659 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 05:30:03.739102  620659 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 05:30:03.739176  620659 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 05:30:03.739233  620659 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 05:30:03.739300  620659 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 05:30:03.739364  620659 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 05:30:03.739440  620659 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 05:30:03.739516  620659 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 05:30:03.739557  620659 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 05:30:03.739618  620659 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:30:03.895035  620659 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:30:04.325982  620659 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:30:04.494394  620659 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:30:04.634145  620659 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:30:04.939020  620659 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:30:04.940161  620659 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:30:04.943220  620659 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:30:04.946949  620659 out.go:252]   - Booting up control plane ...
	I1216 05:30:04.947063  620659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:30:04.947146  620659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:30:04.949549  620659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:30:04.974504  620659 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:30:04.974616  620659 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:30:04.984294  620659 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:30:04.984397  620659 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:30:04.984437  620659 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:30:05.153892  620659 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:30:05.154015  620659 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:34:05.154370  620659 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000693915s
	I1216 05:34:05.154400  620659 kubeadm.go:319] 
	I1216 05:34:05.154458  620659 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:34:05.154492  620659 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:34:05.154597  620659 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:34:05.154603  620659 kubeadm.go:319] 
	I1216 05:34:05.154708  620659 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:34:05.154740  620659 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:34:05.154771  620659 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:34:05.154775  620659 kubeadm.go:319] 
	I1216 05:34:05.158105  620659 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 05:34:05.158568  620659 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:34:05.158686  620659 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:34:05.158926  620659 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 05:34:05.158933  620659 kubeadm.go:319] 
	I1216 05:34:05.159002  620659 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
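The whole init failure comes down to one symptom: the kubelet never answered its health endpoint within kubeadm's 4m0s budget. The triage commands are already named in the output itself, so a first pass needs nothing beyond them:

    # Hedged sketch: first-line kubelet triage, using only the commands that the
    # kubeadm output above suggests or polls.
    systemctl status kubelet                    # is the unit active at all?
    journalctl -xeu kubelet                     # why it is not starting or is crash-looping
    curl -sSL http://127.0.0.1:10248/healthz    # the exact probe kubeadm waits on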
	W1216 05:34:05.159138  620659 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000693915s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 05:34:05.159215  620659 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 05:34:05.584656  620659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:34:05.603667  620659 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:34:05.603731  620659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:34:05.613199  620659 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:34:05.613221  620659 kubeadm.go:158] found existing configuration files:
	
	I1216 05:34:05.613272  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:34:05.622344  620659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:34:05.622411  620659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:34:05.633930  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:34:05.643526  620659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:34:05.643592  620659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:34:05.655105  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:34:05.666056  620659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:34:05.666127  620659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:34:05.674235  620659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:34:05.687562  620659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:34:05.687638  620659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:34:05.697404  620659 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:34:05.760798  620659 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:34:05.761435  620659 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:34:05.869200  620659 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:34:05.869273  620659 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1216 05:34:05.869309  620659 kubeadm.go:319] OS: Linux
	I1216 05:34:05.869354  620659 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:34:05.869402  620659 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 05:34:05.869449  620659 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:34:05.869497  620659 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:34:05.869545  620659 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:34:05.869593  620659 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:34:05.869638  620659 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:34:05.869694  620659 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:34:05.869741  620659 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 05:34:05.969325  620659 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:34:05.969437  620659 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:34:05.969529  620659 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:34:05.979747  620659 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:34:05.983588  620659 out.go:252]   - Generating certificates and keys ...
	I1216 05:34:05.983684  620659 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:34:05.983749  620659 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:34:05.983827  620659 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 05:34:05.983887  620659 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 05:34:05.983957  620659 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 05:34:05.984419  620659 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 05:34:05.985738  620659 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 05:34:05.986164  620659 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 05:34:05.986697  620659 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 05:34:05.987088  620659 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 05:34:05.987557  620659 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 05:34:05.987624  620659 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:34:06.298539  620659 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:34:06.682934  620659 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:34:06.759712  620659 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:34:06.965683  620659 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:34:07.322442  620659 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:34:07.323558  620659 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:34:07.326802  620659 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:34:07.333003  620659 out.go:252]   - Booting up control plane ...
	I1216 05:34:07.333136  620659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:34:07.333215  620659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:34:07.334523  620659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:34:07.370348  620659 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:34:07.370684  620659 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:34:07.378792  620659 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:34:07.379178  620659 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:34:07.379401  620659 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:34:07.547071  620659 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:34:07.547200  620659 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:38:07.547989  620659 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000819446s
	I1216 05:38:07.548023  620659 kubeadm.go:319] 
	I1216 05:38:07.548080  620659 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:38:07.548114  620659 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:38:07.548219  620659 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:38:07.548224  620659 kubeadm.go:319] 
	I1216 05:38:07.548328  620659 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:38:07.548360  620659 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:38:07.548390  620659 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:38:07.548395  620659 kubeadm.go:319] 
	I1216 05:38:07.551687  620659 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 05:38:07.552114  620659 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:38:07.552227  620659 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:38:07.552468  620659 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 05:38:07.552477  620659 kubeadm.go:319] 
	I1216 05:38:07.552546  620659 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 05:38:07.552603  620659 kubeadm.go:403] duration metric: took 12m8.443615112s to StartCluster
	I1216 05:38:07.552640  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:38:07.552703  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:38:07.578953  620659 cri.go:89] found id: ""
	I1216 05:38:07.578980  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.578989  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:38:07.578996  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:38:07.579058  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:38:07.604978  620659 cri.go:89] found id: ""
	I1216 05:38:07.605004  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.605013  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:38:07.605019  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:38:07.605105  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:38:07.632302  620659 cri.go:89] found id: ""
	I1216 05:38:07.632328  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.632338  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:38:07.632344  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:38:07.632401  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:38:07.659496  620659 cri.go:89] found id: ""
	I1216 05:38:07.659521  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.659530  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:38:07.659536  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:38:07.659594  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:38:07.684191  620659 cri.go:89] found id: ""
	I1216 05:38:07.684219  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.684228  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:38:07.684235  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:38:07.684302  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:38:07.713554  620659 cri.go:89] found id: ""
	I1216 05:38:07.713576  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.713585  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:38:07.713591  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:38:07.713647  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:38:07.738708  620659 cri.go:89] found id: ""
	I1216 05:38:07.738732  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.738741  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:38:07.738747  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:38:07.738805  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:38:07.764136  620659 cri.go:89] found id: ""
	I1216 05:38:07.764165  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.764173  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:38:07.764184  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:38:07.764198  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:38:07.781101  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:38:07.781127  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:38:07.860734  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:38:07.860757  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:38:07.860774  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:38:07.897628  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:38:07.897661  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:38:07.928654  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:38:07.928722  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 05:38:07.999896  620659 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000819446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 05:38:07.999994  620659 out.go:285] * 
	W1216 05:38:08.000051  620659 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000819446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 05:38:08.000063  620659 out.go:285] * 
	W1216 05:38:08.002188  620659 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:38:08.009930  620659 out.go:203] 
	W1216 05:38:08.013874  620659 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000819446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 05:38:08.013928  620659 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 05:38:08.013955  620659 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 05:38:08.017258  620659 out.go:203] 

                                                
                                                
** /stderr **
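The repeated kubeadm failure above reduces to one symptom: the kubelet never answered its health probe at http://127.0.0.1:10248/healthz within 4m0s, so no control-plane container was ever created (the crictl sweeps found none). A minimal triage sketch, using only the profile name from this run and the commands the log itself suggests (a hedged starting point, not a verified fix):

	# inspect the kubelet inside the minikube node (docker driver)
	minikube -p kubernetes-upgrade-913873 ssh -- sudo systemctl status kubelet
	minikube -p kubernetes-upgrade-913873 ssh -- sudo journalctl -xeu kubelet
	# replay the exact probe kubeadm was polling
	minikube -p kubernetes-upgrade-913873 ssh -- curl -sSL http://127.0.0.1:10248/healthz
	# the retry minikube itself suggests for cgroup-driver mismatches
	out/minikube-linux-arm64 start -p kubernetes-upgrade-913873 --extra-config=kubelet.cgroup-driver=systemd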
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-913873 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
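One recurring preflight warning is worth decoding, since it is a plausible root cause rather than noise: the verification output shows this 5.15.0-1084-aws host on cgroups v1, and kubelet v1.35 refuses cgroup v1 hosts by default, which would match a kubelet that starts and then never becomes healthy. The warning names the opt-out itself: set the KubeletConfiguration option FailCgroupV1 to false. A hedged sketch of that change, reusing the config path the log's kubelet-start step writes (whether minikube plumbs this field through is not shown in this run, and the kubelet journal that would confirm the theory is not captured here):

	# append the opt-in named by the warning to the kubelet config file from the log,
	# then restart the kubelet (assumes a top-level YAML mapping in config.yaml)
	minikube -p kubernetes-upgrade-913873 ssh -- \
	  "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"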
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-913873 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-913873 version --output=json: exit status 1 (84.698287ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
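Note that the version output above carries only clientVersion: the serverVersion half is missing because the connection to 192.168.76.2:8443 was refused, consistent with the apiserver static pod never having been created. A quick hedged confirmation against the same context:

	# both should fail fast while the control plane is down
	kubectl --context kubernetes-upgrade-913873 get --raw /readyz
	curl -k https://192.168.76.2:8443/readyz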
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-16 05:38:08.46270081 +0000 UTC m=+5247.725141706
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-913873
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-913873:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a23bea1bb5a1d81c21ad58db1322bc664783510cb73ffcaa5183df2274286d4",
	        "Created": "2025-12-16T05:25:13.272155593Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 620789,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:25:48.330878535Z",
	            "FinishedAt": "2025-12-16T05:25:47.331334059Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/7a23bea1bb5a1d81c21ad58db1322bc664783510cb73ffcaa5183df2274286d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a23bea1bb5a1d81c21ad58db1322bc664783510cb73ffcaa5183df2274286d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a23bea1bb5a1d81c21ad58db1322bc664783510cb73ffcaa5183df2274286d4/hosts",
	        "LogPath": "/var/lib/docker/containers/7a23bea1bb5a1d81c21ad58db1322bc664783510cb73ffcaa5183df2274286d4/7a23bea1bb5a1d81c21ad58db1322bc664783510cb73ffcaa5183df2274286d4-json.log",
	        "Name": "/kubernetes-upgrade-913873",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-913873:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-913873",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7a23bea1bb5a1d81c21ad58db1322bc664783510cb73ffcaa5183df2274286d4",
	                "LowerDir": "/var/lib/docker/overlay2/39ce3ef90b92a624e30f412eb2554073b6b21f1fb551bcd8f7e440ac1a7669f6-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/39ce3ef90b92a624e30f412eb2554073b6b21f1fb551bcd8f7e440ac1a7669f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/39ce3ef90b92a624e30f412eb2554073b6b21f1fb551bcd8f7e440ac1a7669f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/39ce3ef90b92a624e30f412eb2554073b6b21f1fb551bcd8f7e440ac1a7669f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-913873",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-913873/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-913873",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-913873",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-913873",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b12559c67a12fd52dac8799db17649ebf7c7e3965e61f5e84ba1b03096c1bd6",
	            "SandboxKey": "/var/run/docker/netns/6b12559c67a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33373"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33374"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33377"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33375"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33376"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-913873": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:75:98:17:a3:ef",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5b76ff67965679ee5d7f304f154878e867b1195ef28cda9d99e388fda76b8f80",
	                    "EndpointID": "8314dcb227ec919bed15f5ab2de6be5c1a2cbb45c840d4ce0d5a78cc96a34825",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-913873",
	                        "7a23bea1bb5a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
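The inspect dump above is the full JSON; for this connection-refused failure, the two fields that matter are the API-server port mapping and the container address. A minimal sketch, assuming the same host and the container name shown in the log, using only docker's built-in Go templates:

    # Host port forwarded to the API server (8443/tcp) -- 33376 in the JSON above
    docker inspect kubernetes-upgrade-913873 \
      --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'

    # Container IP on the cluster network -- the 192.168.76.2 that kubectl could not reach
    docker inspect kubernetes-upgrade-913873 \
      --format '{{ (index .NetworkSettings.Networks "kubernetes-upgrade-913873").IPAddress }}'

These only show where the API server should have been listening; the refusal itself comes from nothing listening inside the container, as the kubeadm output further down shows.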
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-913873 -n kubernetes-upgrade-913873
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-913873 -n kubernetes-upgrade-913873: exit status 2 (341.976308ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-913873 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-989845 sudo systemctl status docker --all --full --no-pager                                                                                                                                                     │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo systemctl cat docker --no-pager                                                                                                                                                                     │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo cat /etc/docker/daemon.json                                                                                                                                                                         │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo docker system info                                                                                                                                                                                  │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                 │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                 │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                            │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                      │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo cri-dockerd --version                                                                                                                                                                               │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                 │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo systemctl cat containerd --no-pager                                                                                                                                                                 │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                          │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo cat /etc/containerd/config.toml                                                                                                                                                                     │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo containerd config dump                                                                                                                                                                              │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo systemctl status crio --all --full --no-pager                                                                                                                                                       │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo systemctl cat crio --no-pager                                                                                                                                                                       │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                             │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ ssh     │ -p cilium-989845 sudo crio config                                                                                                                                                                                         │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │                     │
	│ delete  │ -p cilium-989845                                                                                                                                                                                                          │ cilium-989845            │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │ 16 Dec 25 05:33 UTC │
	│ start   │ -p force-systemd-env-288672 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-288672 │ jenkins │ v1.37.0 │ 16 Dec 25 05:33 UTC │ 16 Dec 25 05:34 UTC │
	│ delete  │ -p force-systemd-env-288672                                                                                                                                                                                               │ force-systemd-env-288672 │ jenkins │ v1.37.0 │ 16 Dec 25 05:34 UTC │ 16 Dec 25 05:34 UTC │
	│ start   │ -p cert-expiration-096436 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-096436   │ jenkins │ v1.37.0 │ 16 Dec 25 05:34 UTC │ 16 Dec 25 05:34 UTC │
	│ start   │ -p cert-expiration-096436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                 │ cert-expiration-096436   │ jenkins │ v1.37.0 │ 16 Dec 25 05:37 UTC │ 16 Dec 25 05:37 UTC │
	│ delete  │ -p cert-expiration-096436                                                                                                                                                                                                 │ cert-expiration-096436   │ jenkins │ v1.37.0 │ 16 Dec 25 05:37 UTC │ 16 Dec 25 05:37 UTC │
	│ start   │ -p cert-options-888180 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-888180      │ jenkins │ v1.37.0 │ 16 Dec 25 05:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:37:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
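Taking the first entry below as a worked example of that format: in "I1216 05:37:59.655277  659706 out.go:360]", "I" is the severity (I/W/E/F for info/warning/error/fatal), "1216" is mmdd (December 16), "05:37:59.655277" the timestamp, "659706" the thread id (here minikube's process id), and "out.go:360" the source file and line that emitted the message.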
	I1216 05:37:59.655277  659706 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:37:59.655392  659706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:37:59.655396  659706 out.go:374] Setting ErrFile to fd 2...
	I1216 05:37:59.655399  659706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:37:59.655672  659706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:37:59.656116  659706 out.go:368] Setting JSON to false
	I1216 05:37:59.656945  659706 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15626,"bootTime":1765847854,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 05:37:59.657000  659706 start.go:143] virtualization:  
	I1216 05:37:59.660629  659706 out.go:179] * [cert-options-888180] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 05:37:59.665189  659706 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 05:37:59.665343  659706 notify.go:221] Checking for updates...
	I1216 05:37:59.671890  659706 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:37:59.675071  659706 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 05:37:59.678113  659706 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 05:37:59.681180  659706 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 05:37:59.684242  659706 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:37:59.687837  659706 config.go:182] Loaded profile config "kubernetes-upgrade-913873": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 05:37:59.687964  659706 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:37:59.725204  659706 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 05:37:59.725345  659706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:37:59.820394  659706 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 05:37:59.811033381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 05:37:59.820504  659706 docker.go:319] overlay module found
	I1216 05:37:59.823699  659706 out.go:179] * Using the docker driver based on user configuration
	I1216 05:37:59.826652  659706 start.go:309] selected driver: docker
	I1216 05:37:59.826670  659706 start.go:927] validating driver "docker" against <nil>
	I1216 05:37:59.826686  659706 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:37:59.827488  659706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:37:59.888145  659706 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 05:37:59.877882193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 05:37:59.888295  659706 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 05:37:59.888501  659706 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 05:37:59.891543  659706 out.go:179] * Using Docker driver with root privileges
	I1216 05:37:59.894624  659706 cni.go:84] Creating CNI manager for ""
	I1216 05:37:59.894689  659706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:37:59.894697  659706 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 05:37:59.894783  659706 start.go:353] cluster config:
	{Name:cert-options-888180 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-888180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:37:59.898091  659706 out.go:179] * Starting "cert-options-888180" primary control-plane node in "cert-options-888180" cluster
	I1216 05:37:59.901006  659706 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 05:37:59.904177  659706 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
	I1216 05:37:59.907189  659706 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:37:59.907239  659706 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1216 05:37:59.907252  659706 cache.go:65] Caching tarball of preloaded images
	I1216 05:37:59.907361  659706 preload.go:238] Found /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 05:37:59.907373  659706 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:37:59.907512  659706 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/cert-options-888180/config.json ...
	I1216 05:37:59.907529  659706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/cert-options-888180/config.json: {Name:mkd525f3aa547dc6343a5755856acf78d9e8adb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:37:59.907696  659706 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 05:37:59.936776  659706 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 05:37:59.936787  659706 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
	I1216 05:37:59.936799  659706 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:37:59.936829  659706 start.go:360] acquireMachinesLock for cert-options-888180: {Name:mkadecf4fd7e2d0274ccbb44537072831690f24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:37:59.936927  659706 start.go:364] duration metric: took 84.669µs to acquireMachinesLock for "cert-options-888180"
	I1216 05:37:59.936951  659706 start.go:93] Provisioning new machine with config: &{Name:cert-options-888180 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-888180 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:37:59.937019  659706 start.go:125] createHost starting for "" (driver="docker")
	I1216 05:37:59.940515  659706 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:37:59.940740  659706 start.go:159] libmachine.API.Create for "cert-options-888180" (driver="docker")
	I1216 05:37:59.940773  659706 client.go:173] LocalClient.Create starting
	I1216 05:37:59.940844  659706 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem
	I1216 05:37:59.940877  659706 main.go:143] libmachine: Decoding PEM data...
	I1216 05:37:59.940891  659706 main.go:143] libmachine: Parsing certificate...
	I1216 05:37:59.940944  659706 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem
	I1216 05:37:59.940959  659706 main.go:143] libmachine: Decoding PEM data...
	I1216 05:37:59.940969  659706 main.go:143] libmachine: Parsing certificate...
	I1216 05:37:59.941381  659706 cli_runner.go:164] Run: docker network inspect cert-options-888180 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:37:59.957233  659706 cli_runner.go:211] docker network inspect cert-options-888180 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:37:59.957300  659706 network_create.go:284] running [docker network inspect cert-options-888180] to gather additional debugging logs...
	I1216 05:37:59.957314  659706 cli_runner.go:164] Run: docker network inspect cert-options-888180
	W1216 05:37:59.973467  659706 cli_runner.go:211] docker network inspect cert-options-888180 returned with exit code 1
	I1216 05:37:59.973488  659706 network_create.go:287] error running [docker network inspect cert-options-888180]: docker network inspect cert-options-888180: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-options-888180 not found
	I1216 05:37:59.973498  659706 network_create.go:289] output of [docker network inspect cert-options-888180]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-options-888180 not found
	
	** /stderr **
	I1216 05:37:59.973607  659706 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:37:59.989877  659706 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66a1741c73ed IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:45:79:86:27:66} reservation:<nil>}
	I1216 05:37:59.990198  659706 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d27f32a0237f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:74:e9:6d:a1:43} reservation:<nil>}
	I1216 05:37:59.990411  659706 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5beb726a92d1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:21:8b:0e:44:88} reservation:<nil>}
	I1216 05:37:59.990657  659706 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5b76ff679656 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d6:16:ea:b1:9d:62} reservation:<nil>}
	I1216 05:37:59.991020  659706 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1c780}
	I1216 05:37:59.991039  659706 network_create.go:124] attempt to create docker network cert-options-888180 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1216 05:37:59.991095  659706 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-options-888180 cert-options-888180
	I1216 05:38:00.199142  659706 network_create.go:108] docker network cert-options-888180 192.168.85.0/24 created
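The four "skipping subnet ... that is taken" lines above show minikube walking the private /24 ranges until it finds a free one, then creating a bridge network on it. A minimal sketch for double-checking the result, assuming only the standard docker CLI and the network name from the log:

    # Confirm the subnet/gateway minikube picked
    docker network inspect cert-options-888180 \
      --format '{{ range .IPAM.Config }}{{ .Subnet }} via {{ .Gateway }}{{ end }}'
    # expected: 192.168.85.0/24 via 192.168.85.1

    # List all networks minikube has labeled as its own
    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true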
	I1216 05:38:00.199176  659706 kic.go:121] calculated static IP "192.168.85.2" for the "cert-options-888180" container
	I1216 05:38:00.199289  659706 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:38:00.221993  659706 cli_runner.go:164] Run: docker volume create cert-options-888180 --label name.minikube.sigs.k8s.io=cert-options-888180 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:38:00.273416  659706 oci.go:103] Successfully created a docker volume cert-options-888180
	I1216 05:38:00.273525  659706 cli_runner.go:164] Run: docker run --rm --name cert-options-888180-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-888180 --entrypoint /usr/bin/test -v cert-options-888180:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
	I1216 05:38:00.865745  659706 oci.go:107] Successfully prepared a docker volume cert-options-888180
	I1216 05:38:00.865798  659706 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:38:00.865806  659706 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 05:38:00.865888  659706 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-options-888180:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 05:38:07.547989  620659 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000819446s
	I1216 05:38:07.548023  620659 kubeadm.go:319] 
	I1216 05:38:07.548080  620659 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:38:07.548114  620659 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:38:07.548219  620659 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:38:07.548224  620659 kubeadm.go:319] 
	I1216 05:38:07.548328  620659 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:38:07.548360  620659 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:38:07.548390  620659 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:38:07.548395  620659 kubeadm.go:319] 
	I1216 05:38:07.551687  620659 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1216 05:38:07.552114  620659 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:38:07.552227  620659 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:38:07.552468  620659 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 05:38:07.552477  620659 kubeadm.go:319] 
	I1216 05:38:07.552546  620659 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 05:38:07.552603  620659 kubeadm.go:403] duration metric: took 12m8.443615112s to StartCluster
	I1216 05:38:07.552640  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:38:07.552703  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:38:07.578953  620659 cri.go:89] found id: ""
	I1216 05:38:07.578980  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.578989  620659 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:38:07.578996  620659 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:38:07.579058  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:38:07.604978  620659 cri.go:89] found id: ""
	I1216 05:38:07.605004  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.605013  620659 logs.go:284] No container was found matching "etcd"
	I1216 05:38:07.605019  620659 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:38:07.605105  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:38:07.632302  620659 cri.go:89] found id: ""
	I1216 05:38:07.632328  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.632338  620659 logs.go:284] No container was found matching "coredns"
	I1216 05:38:07.632344  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:38:07.632401  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:38:07.659496  620659 cri.go:89] found id: ""
	I1216 05:38:07.659521  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.659530  620659 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:38:07.659536  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:38:07.659594  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:38:07.684191  620659 cri.go:89] found id: ""
	I1216 05:38:07.684219  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.684228  620659 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:38:07.684235  620659 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:38:07.684302  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:38:07.713554  620659 cri.go:89] found id: ""
	I1216 05:38:07.713576  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.713585  620659 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:38:07.713591  620659 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:38:07.713647  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:38:07.738708  620659 cri.go:89] found id: ""
	I1216 05:38:07.738732  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.738741  620659 logs.go:284] No container was found matching "kindnet"
	I1216 05:38:07.738747  620659 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:38:07.738805  620659 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:38:07.764136  620659 cri.go:89] found id: ""
	I1216 05:38:07.764165  620659 logs.go:282] 0 containers: []
	W1216 05:38:07.764173  620659 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:38:07.764184  620659 logs.go:123] Gathering logs for dmesg ...
	I1216 05:38:07.764198  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:38:07.781101  620659 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:38:07.781127  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:38:07.860734  620659 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:38:07.860757  620659 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:38:07.860774  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:38:07.897628  620659 logs.go:123] Gathering logs for container status ...
	I1216 05:38:07.897661  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:38:07.928654  620659 logs.go:123] Gathering logs for kubelet ...
	I1216 05:38:07.928722  620659 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 05:38:07.999896  620659 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000819446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
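The failure mode here is the one kubeadm describes: the static-pod manifests were written, but the kubelet never answered its health check, so no control-plane containers exist (consistent with every crictl query above returning empty). A hedged sketch of the diagnostics kubeadm suggests, adapted to the fact that the "node" is the docker container inspected earlier; `minikube ssh -p kubernetes-upgrade-913873` would work equally well:

    # kubeadm's two suggested checks, run inside the node container
    docker exec kubernetes-upgrade-913873 systemctl status kubelet --no-pager
    docker exec kubernetes-upgrade-913873 journalctl -xeu kubelet --no-pager

    # The endpoint kubeadm polls for up to 4m0s (assumes curl exists in the image)
    docker exec kubernetes-upgrade-913873 curl -sS http://127.0.0.1:10248/healthz

The cgroups v1 warning above also names the relevant kubelet configuration option (FailCgroupV1) and links the KEP; on a cgroup v1 host like this 5.15 AWS kernel, the journalctl output is the place to confirm whether that is what stopped the kubelet.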
	W1216 05:38:07.999994  620659 out.go:285] * 
	W1216 05:38:08.000051  620659 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000819446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 05:38:08.000063  620659 out.go:285] * 
	W1216 05:38:08.002188  620659 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:38:08.009930  620659 out.go:203] 
	W1216 05:38:08.013874  620659 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000819446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 05:38:08.013928  620659 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 05:38:08.013955  620659 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 05:38:08.017258  620659 out.go:203] 
	
	
	==> CRI-O <==
	Dec 16 05:25:54 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:25:54.314074188Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 16 05:25:54 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:25:54.314107755Z" level=info msg="Starting seccomp notifier watcher"
	Dec 16 05:25:54 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:25:54.314149544Z" level=info msg="Create NRI interface"
	Dec 16 05:25:54 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:25:54.314241049Z" level=info msg="built-in NRI default validator is disabled"
	Dec 16 05:25:54 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:25:54.314250518Z" level=info msg="runtime interface created"
	Dec 16 05:25:54 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:25:54.314263269Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 16 05:25:54 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:25:54.314269661Z" level=info msg="runtime interface starting up..."
	Dec 16 05:25:54 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:25:54.31427492Z" level=info msg="starting plugins..."
	Dec 16 05:25:54 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:25:54.314286998Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 16 05:25:54 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:25:54.314346831Z" level=info msg="No systemd watchdog enabled"
	Dec 16 05:25:54 kubernetes-upgrade-913873 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 16 05:30:03 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:30:03.730478141Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=28e3e34e-9201-42bb-9b27-30431d506e0a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:30:03 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:30:03.731201882Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=2b016aa4-205c-4254-9c22-dd36d7860a26 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:30:03 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:30:03.731696304Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=06b66f73-41fd-4e79-9098-651b2c38d085 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:30:03 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:30:03.732184965Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=631fd6b9-fba9-4f07-ba45-ff8a7621d70c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:30:03 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:30:03.732648322Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=3a012e0b-ecfa-436c-b355-fa10a8f89715 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:30:03 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:30:03.733163568Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=75b0f701-772e-4c7f-8f38-ca600db4966a name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:30:03 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:30:03.733632685Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a394dc0b-6c7c-48d1-935b-1618e3b45168 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:34:05 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:34:05.974039264Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=466ea687-3bf9-4e0c-b87e-8e0542769e06 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:34:05 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:34:05.975128974Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=1e7f08b8-b247-4830-8304-eff8ef8bf574 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:34:05 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:34:05.975762408Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c715c789-b35d-4934-b6bd-d7ea5cc226b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:34:05 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:34:05.976314078Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=c28ac75b-7473-4fdc-abb7-a5b8fdef2672 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:34:05 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:34:05.976912681Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0c8c5de9-e6d1-4a16-9f14-bc9d9d930d2c name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:34:05 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:34:05.97762811Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=8d0a9c23-5fba-43a0-ae26-d4e0581cda95 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 05:34:05 kubernetes-upgrade-913873 crio[614]: time="2025-12-16T05:34:05.978223538Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=7446cdf9-f0fc-4e31-ac76-8924138e6d5a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 05:02] overlayfs: idmapped layers are currently not supported
	[  +4.043407] overlayfs: idmapped layers are currently not supported
	[Dec16 05:03] overlayfs: idmapped layers are currently not supported
	[Dec16 05:04] overlayfs: idmapped layers are currently not supported
	[Dec16 05:05] overlayfs: idmapped layers are currently not supported
	[Dec16 05:10] overlayfs: idmapped layers are currently not supported
	[Dec16 05:11] overlayfs: idmapped layers are currently not supported
	[Dec16 05:12] overlayfs: idmapped layers are currently not supported
	[Dec16 05:13] overlayfs: idmapped layers are currently not supported
	[Dec16 05:14] overlayfs: idmapped layers are currently not supported
	[Dec16 05:16] overlayfs: idmapped layers are currently not supported
	[ +25.166334] overlayfs: idmapped layers are currently not supported
	[  +0.467202] overlayfs: idmapped layers are currently not supported
	[Dec16 05:17] overlayfs: idmapped layers are currently not supported
	[ +18.764288] overlayfs: idmapped layers are currently not supported
	[Dec16 05:18] overlayfs: idmapped layers are currently not supported
	[ +26.071219] overlayfs: idmapped layers are currently not supported
	[Dec16 05:20] overlayfs: idmapped layers are currently not supported
	[Dec16 05:21] overlayfs: idmapped layers are currently not supported
	[Dec16 05:23] overlayfs: idmapped layers are currently not supported
	[  +3.507219] overlayfs: idmapped layers are currently not supported
	[Dec16 05:25] overlayfs: idmapped layers are currently not supported
	[Dec16 05:33] overlayfs: idmapped layers are currently not supported
	[ +47.371816] overlayfs: idmapped layers are currently not supported
	[Dec16 05:34] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 05:38:09 up  4:20,  0 user,  load average: 1.80, 1.69, 1.82
	Linux kubernetes-upgrade-913873 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:38:07 kubernetes-upgrade-913873 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:38:07 kubernetes-upgrade-913873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 05:38:07 kubernetes-upgrade-913873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:38:07 kubernetes-upgrade-913873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:38:07 kubernetes-upgrade-913873 kubelet[12070]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 05:38:07 kubernetes-upgrade-913873 kubelet[12070]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 05:38:07 kubernetes-upgrade-913873 kubelet[12070]: E1216 05:38:07.885650   12070 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:38:07 kubernetes-upgrade-913873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:38:07 kubernetes-upgrade-913873 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:38:08 kubernetes-upgrade-913873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 05:38:08 kubernetes-upgrade-913873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:38:08 kubernetes-upgrade-913873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:38:08 kubernetes-upgrade-913873 kubelet[12092]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 05:38:08 kubernetes-upgrade-913873 kubelet[12092]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 05:38:08 kubernetes-upgrade-913873 kubelet[12092]: E1216 05:38:08.640558   12092 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:38:08 kubernetes-upgrade-913873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:38:08 kubernetes-upgrade-913873 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:38:09 kubernetes-upgrade-913873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 05:38:09 kubernetes-upgrade-913873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:38:09 kubernetes-upgrade-913873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:38:09 kubernetes-upgrade-913873 kubelet[12166]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 05:38:09 kubernetes-upgrade-913873 kubelet[12166]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 16 05:38:09 kubernetes-upgrade-913873 kubelet[12166]: E1216 05:38:09.420294   12166 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:38:09 kubernetes-upgrade-913873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:38:09 kubernetes-upgrade-913873 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-913873 -n kubernetes-upgrade-913873
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-913873 -n kubernetes-upgrade-913873: exit status 2 (431.406869ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-913873" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-913873" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-913873
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-913873: (2.522034178s)
--- FAIL: TestKubernetesUpgrade (785.81s)
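
Both symptoms in this failure trace to the same root cause: kubelet v1.35.0-beta.0 exits at startup with "kubelet is configured to not run on a host using cgroup v1", and the kubeadm preflight warning names the escape hatch, the KubeletConfiguration option 'FailCgroupV1'. A minimal diagnostic sketch, assuming the /var/lib/kubelet/config.yaml path shown in the log, a kubelet new enough to know the field, and the usual lowerCamel YAML spelling failCgroupV1 (the warning only gives the Go-style name):

	# "tmpfs" here means the host is on cgroup v1; "cgroup2fs" means cgroup v2.
	stat -fc %T /sys/fs/cgroup/

	# Sketch only: append the override named by the preflight warning to the
	# kubelet config that kubeadm wrote, then restart the service.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

Per the warning text, the SystemVerification preflight check would still have to be skipped explicitly for kubeadm init to get past the deprecation validation.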

                                                
                                    
x
+
TestPause/serial/Pause (8.62s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-879168 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-879168 --alsologtostderr -v=5: exit status 80 (2.326902896s)

                                                
                                                
-- stdout --
	* Pausing node pause-879168 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:24:55.302498  614359 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:24:55.303091  614359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:24:55.303109  614359 out.go:374] Setting ErrFile to fd 2...
	I1216 05:24:55.303116  614359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:24:55.303736  614359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:24:55.304040  614359 out.go:368] Setting JSON to false
	I1216 05:24:55.304069  614359 mustload.go:66] Loading cluster: pause-879168
	I1216 05:24:55.304537  614359 config.go:182] Loaded profile config "pause-879168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:24:55.305410  614359 cli_runner.go:164] Run: docker container inspect pause-879168 --format={{.State.Status}}
	I1216 05:24:55.331785  614359 host.go:66] Checking if "pause-879168" exists ...
	I1216 05:24:55.332138  614359 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:24:55.428580  614359 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2025-12-16 05:24:55.417957019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 05:24:55.429378  614359 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765836331-22158/minikube-v1.37.0-1765836331-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765836331-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-879168 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1216 05:24:55.433408  614359 out.go:179] * Pausing node pause-879168 ... 
	I1216 05:24:55.438614  614359 host.go:66] Checking if "pause-879168" exists ...
	I1216 05:24:55.438966  614359 ssh_runner.go:195] Run: systemctl --version
	I1216 05:24:55.439013  614359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-879168
	I1216 05:24:55.464767  614359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/pause-879168/id_rsa Username:docker}
	I1216 05:24:55.598279  614359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:24:55.616273  614359 pause.go:52] kubelet running: true
	I1216 05:24:55.616355  614359 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:24:55.869603  614359 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:24:55.869692  614359 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:24:55.955181  614359 cri.go:89] found id: "d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a"
	I1216 05:24:55.955210  614359 cri.go:89] found id: "6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092"
	I1216 05:24:55.955216  614359 cri.go:89] found id: "292fc57a6b2f05e0366768d4818f2f82aa3678cab45473b441a002b1c2edf832"
	I1216 05:24:55.955220  614359 cri.go:89] found id: "0ecb3cb231904dc7f5c6ab5a546ad2edc08955e1ecbc8c04bffec5e146eb5865"
	I1216 05:24:55.955224  614359 cri.go:89] found id: "e815c7290489a4f8e21f38a344e67da2bf330eddc5d3f56582952cc63031840b"
	I1216 05:24:55.955227  614359 cri.go:89] found id: "6daae05879a8bfbcb59c78c8282efa943812c98cbe80bf9f862169baef894f22"
	I1216 05:24:55.955230  614359 cri.go:89] found id: "6b8e81e70d40373c6eb323cdec44bd51871ee3925462b7f451a590587032fedb"
	I1216 05:24:55.955260  614359 cri.go:89] found id: "d1c7aee14d1048b18fcd07209b943009d9a85a69d0c5cee668acd989fb9ed309"
	I1216 05:24:55.955264  614359 cri.go:89] found id: "6d8570293bc3b615ca8558c9c245c34413db3307ea4c9dc1156de6be82366c43"
	I1216 05:24:55.955271  614359 cri.go:89] found id: "3e66591d8ee86b0879aeecb1a61f768173550e0389177364fa184b2694aff00f"
	I1216 05:24:55.955278  614359 cri.go:89] found id: "c0d025670a91a4d8a61391711a080e93e875e808cbaa29712ba6feb5636a12cc"
	I1216 05:24:55.955282  614359 cri.go:89] found id: "d8bd8959d629eec53cb3c82761a3da996cdd881c9d140609854bbf22b3702a51"
	I1216 05:24:55.955284  614359 cri.go:89] found id: "2e109aacd16433537ebfcc0e8f0693e4255203df82bdfdcb738267fffab893f0"
	I1216 05:24:55.955287  614359 cri.go:89] found id: "80e7a81bd9e8176865d8a2b2254d322cff4d032e109644dc1ff242823b19f2c2"
	I1216 05:24:55.955290  614359 cri.go:89] found id: ""
	I1216 05:24:55.955355  614359 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:24:55.979117  614359 retry.go:31] will retry after 306.605882ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:24:55Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:24:56.286704  614359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:24:56.310935  614359 pause.go:52] kubelet running: false
	I1216 05:24:56.311013  614359 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:24:56.510588  614359 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:24:56.510680  614359 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:24:56.678121  614359 cri.go:89] found id: "d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a"
	I1216 05:24:56.678145  614359 cri.go:89] found id: "6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092"
	I1216 05:24:56.678150  614359 cri.go:89] found id: "292fc57a6b2f05e0366768d4818f2f82aa3678cab45473b441a002b1c2edf832"
	I1216 05:24:56.678154  614359 cri.go:89] found id: "0ecb3cb231904dc7f5c6ab5a546ad2edc08955e1ecbc8c04bffec5e146eb5865"
	I1216 05:24:56.678161  614359 cri.go:89] found id: "e815c7290489a4f8e21f38a344e67da2bf330eddc5d3f56582952cc63031840b"
	I1216 05:24:56.678165  614359 cri.go:89] found id: "6daae05879a8bfbcb59c78c8282efa943812c98cbe80bf9f862169baef894f22"
	I1216 05:24:56.678168  614359 cri.go:89] found id: "6b8e81e70d40373c6eb323cdec44bd51871ee3925462b7f451a590587032fedb"
	I1216 05:24:56.678172  614359 cri.go:89] found id: "d1c7aee14d1048b18fcd07209b943009d9a85a69d0c5cee668acd989fb9ed309"
	I1216 05:24:56.678174  614359 cri.go:89] found id: "6d8570293bc3b615ca8558c9c245c34413db3307ea4c9dc1156de6be82366c43"
	I1216 05:24:56.678182  614359 cri.go:89] found id: "3e66591d8ee86b0879aeecb1a61f768173550e0389177364fa184b2694aff00f"
	I1216 05:24:56.678186  614359 cri.go:89] found id: "c0d025670a91a4d8a61391711a080e93e875e808cbaa29712ba6feb5636a12cc"
	I1216 05:24:56.678189  614359 cri.go:89] found id: "d8bd8959d629eec53cb3c82761a3da996cdd881c9d140609854bbf22b3702a51"
	I1216 05:24:56.678192  614359 cri.go:89] found id: "2e109aacd16433537ebfcc0e8f0693e4255203df82bdfdcb738267fffab893f0"
	I1216 05:24:56.678195  614359 cri.go:89] found id: "80e7a81bd9e8176865d8a2b2254d322cff4d032e109644dc1ff242823b19f2c2"
	I1216 05:24:56.678198  614359 cri.go:89] found id: ""
	I1216 05:24:56.678248  614359 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:24:56.697382  614359 retry.go:31] will retry after 473.320257ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:24:56Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:24:57.170996  614359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:24:57.185752  614359 pause.go:52] kubelet running: false
	I1216 05:24:57.185869  614359 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1216 05:24:57.372026  614359 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1216 05:24:57.372157  614359 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1216 05:24:57.486635  614359 cri.go:89] found id: "d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a"
	I1216 05:24:57.486705  614359 cri.go:89] found id: "6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092"
	I1216 05:24:57.486725  614359 cri.go:89] found id: "292fc57a6b2f05e0366768d4818f2f82aa3678cab45473b441a002b1c2edf832"
	I1216 05:24:57.486743  614359 cri.go:89] found id: "0ecb3cb231904dc7f5c6ab5a546ad2edc08955e1ecbc8c04bffec5e146eb5865"
	I1216 05:24:57.486762  614359 cri.go:89] found id: "e815c7290489a4f8e21f38a344e67da2bf330eddc5d3f56582952cc63031840b"
	I1216 05:24:57.486793  614359 cri.go:89] found id: "6daae05879a8bfbcb59c78c8282efa943812c98cbe80bf9f862169baef894f22"
	I1216 05:24:57.486812  614359 cri.go:89] found id: "6b8e81e70d40373c6eb323cdec44bd51871ee3925462b7f451a590587032fedb"
	I1216 05:24:57.486830  614359 cri.go:89] found id: "d1c7aee14d1048b18fcd07209b943009d9a85a69d0c5cee668acd989fb9ed309"
	I1216 05:24:57.486848  614359 cri.go:89] found id: "6d8570293bc3b615ca8558c9c245c34413db3307ea4c9dc1156de6be82366c43"
	I1216 05:24:57.486898  614359 cri.go:89] found id: "3e66591d8ee86b0879aeecb1a61f768173550e0389177364fa184b2694aff00f"
	I1216 05:24:57.486923  614359 cri.go:89] found id: "c0d025670a91a4d8a61391711a080e93e875e808cbaa29712ba6feb5636a12cc"
	I1216 05:24:57.486951  614359 cri.go:89] found id: "d8bd8959d629eec53cb3c82761a3da996cdd881c9d140609854bbf22b3702a51"
	I1216 05:24:57.486968  614359 cri.go:89] found id: "2e109aacd16433537ebfcc0e8f0693e4255203df82bdfdcb738267fffab893f0"
	I1216 05:24:57.486988  614359 cri.go:89] found id: "80e7a81bd9e8176865d8a2b2254d322cff4d032e109644dc1ff242823b19f2c2"
	I1216 05:24:57.487005  614359 cri.go:89] found id: ""
	I1216 05:24:57.487087  614359 ssh_runner.go:195] Run: sudo runc list -f json
	I1216 05:24:57.505253  614359 out.go:203] 
	W1216 05:24:57.512595  614359 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:24:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:24:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1216 05:24:57.512621  614359 out.go:285] * 
	* 
	W1216 05:24:57.518779  614359 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:24:57.529134  614359 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-879168 --alsologtostderr -v=5" : exit status 80
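
The pause failure itself is narrower: each retry of sudo runc list -f json inside the node dies on "open /run/runc: no such file or directory", i.e. runc's default state directory is missing even though crictl just enumerated running containers. A minimal sketch for probing this from the host, reusing the pause-879168 profile from the run above (which OCI runtime CRI-O is actually configured with here, runc or e.g. crun with its state elsewhere, is not visible in this log):

	# Does runc's default --root exist inside the node?
	minikube -p pause-879168 ssh -- 'ls -ld /run/runc'

	# Cross-check via the CRI itself; crictl talks to CRI-O directly and does
	# not depend on runc's state directory being present.
	minikube -p pause-879168 ssh -- 'sudo crictl ps -a'

If CRI-O drives a runtime other than runc, /run/runc can be legitimately absent while the containers listed by crictl keep running, which would make the runc-based pause listing fail exactly as captured.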
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-879168
helpers_test.go:244: (dbg) docker inspect pause-879168:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d",
	        "Created": "2025-12-16T05:23:02.56832195Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604962,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:23:02.660328851Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d/hostname",
	        "HostsPath": "/var/lib/docker/containers/68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d/hosts",
	        "LogPath": "/var/lib/docker/containers/68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d/68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d-json.log",
	        "Name": "/pause-879168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-879168:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-879168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d",
	                "LowerDir": "/var/lib/docker/overlay2/c662b5067eecde3f880e2c63b9472521f85720a11db7d7980992fceb50a90950-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c662b5067eecde3f880e2c63b9472521f85720a11db7d7980992fceb50a90950/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c662b5067eecde3f880e2c63b9472521f85720a11db7d7980992fceb50a90950/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c662b5067eecde3f880e2c63b9472521f85720a11db7d7980992fceb50a90950/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-879168",
	                "Source": "/var/lib/docker/volumes/pause-879168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-879168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-879168",
	                "name.minikube.sigs.k8s.io": "pause-879168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "884a060584b97a8ed2dce1b8bea8f58126e9ddad032c88fa9b2be221532f6f2c",
	            "SandboxKey": "/var/run/docker/netns/884a060584b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-879168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:28:08:b3:59:0a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d2fa6e76c2595ea0a91e6f4498896df8093ebd206776c444cfd3ab193a4a65c",
	                    "EndpointID": "dfdf6877162ad069739adfeb807270536d5c5a0513aad42f413f6159d53dee9b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-879168",
	                        "68b6e326a9a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-879168 -n pause-879168
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-879168 -n pause-879168: exit status 2 (376.622575ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-879168 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-879168 logs -n 25: (1.905932934s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-232318 --schedule 5m -v=5 --alsologtostderr                                                         │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --cancel-scheduled                                                                           │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │ 16 Dec 25 05:21 UTC │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │ 16 Dec 25 05:22 UTC │
	│ delete  │ -p scheduled-stop-232318                                                                                              │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │ 16 Dec 25 05:22 UTC │
	│ start   │ -p insufficient-storage-725109 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-725109 │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │                     │
	│ delete  │ -p insufficient-storage-725109                                                                                        │ insufficient-storage-725109 │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │ 16 Dec 25 05:22 UTC │
	│ start   │ -p NoKubernetes-868033 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │                     │
	│ start   │ -p pause-879168 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-879168                │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │ 16 Dec 25 05:24 UTC │
	│ start   │ -p NoKubernetes-868033 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │ 16 Dec 25 05:23 UTC │
	│ start   │ -p NoKubernetes-868033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:23 UTC │ 16 Dec 25 05:24 UTC │
	│ delete  │ -p NoKubernetes-868033                                                                                                │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ start   │ -p NoKubernetes-868033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ ssh     │ -p NoKubernetes-868033 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │                     │
	│ stop    │ -p NoKubernetes-868033                                                                                                │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ start   │ -p NoKubernetes-868033 --driver=docker  --container-runtime=crio                                                      │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ start   │ -p pause-879168 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-879168                │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ ssh     │ -p NoKubernetes-868033 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │                     │
	│ delete  │ -p NoKubernetes-868033                                                                                                │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ start   │ -p missing-upgrade-508979 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-508979      │ jenkins │ v1.35.0 │ 16 Dec 25 05:24 UTC │                     │
	│ pause   │ -p pause-879168 --alsologtostderr -v=5                                                                                │ pause-879168                │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:24:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
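
The preamble above spells out the klog header format used by every entry that follows: a severity letter (I/W/E/F), the date as mmdd, a microsecond timestamp, the emitting thread id, and the source file:line. A minimal Go sketch for splitting such a line (the regexp and field names are illustrative, not minikube's own parser):

package main

import (
	"fmt"
	"regexp"
)

// klogRe matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
// header format stated in the log preamble above.
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	line := "I1216 05:24:30.662586  613493 out.go:345] Setting OutFile to fd 1 ..."
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
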
	I1216 05:24:30.662586  613493 out.go:345] Setting OutFile to fd 1 ...
	I1216 05:24:30.662719  613493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 05:24:30.662723  613493 out.go:358] Setting ErrFile to fd 2...
	I1216 05:24:30.662727  613493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 05:24:30.662953  613493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:24:30.663340  613493 out.go:352] Setting JSON to false
	I1216 05:24:30.664273  613493 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14817,"bootTime":1765847854,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 05:24:30.664340  613493 start.go:139] virtualization:  
	I1216 05:24:30.668437  613493 out.go:177] * [missing-upgrade-508979] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I1216 05:24:30.671635  613493 out.go:177]   - MINIKUBE_LOCATION=22158
	I1216 05:24:30.671689  613493 notify.go:220] Checking for updates...
	I1216 05:24:30.678688  613493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:24:30.681798  613493 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 05:24:30.685007  613493 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 05:24:30.688190  613493 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 05:24:30.694746  613493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:24:30.698307  613493 config.go:182] Loaded profile config "pause-879168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:24:30.698406  613493 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 05:24:30.743247  613493 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1216 05:24:30.743360  613493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:24:30.748162  613493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/last_update_check: {Name:mk828cc8382d2363b57ddbd6e2a4114ce0b4dd86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:24:30.751825  613493 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1216 05:24:30.755377  613493 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1216 05:24:30.848296  613493 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-16 05:24:30.837272435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
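
The `docker system info --format "{{json .}}"` run above is how minikube reads host capabilities such as CPU count, memory, and the cgroup driver. A sketch of the same round trip, assuming only the Docker CLI and a hypothetical trimmed struct covering a few of the fields visible in the dump:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// sysInfo is a hypothetical subset of the fields visible in the dump above;
// the real JSON document carries many more.
type sysInfo struct {
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	CgroupDriver    string `json:"CgroupDriver"`
	OperatingSystem string `json:"OperatingSystem"`
	ServerVersion   string `json:"ServerVersion"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker system info failed:", err)
		return
	}
	var info sysInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s %s, %d CPUs, %d bytes RAM, cgroup driver %s\n",
		info.OperatingSystem, info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
}
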
	I1216 05:24:30.848390  613493 docker.go:318] overlay module found
	I1216 05:24:30.851537  613493 out.go:177] * Using the docker driver based on user configuration
	I1216 05:24:30.854458  613493 start.go:297] selected driver: docker
	I1216 05:24:30.854469  613493 start.go:901] validating driver "docker" against <nil>
	I1216 05:24:30.854483  613493 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:24:30.855225  613493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:24:30.937263  613493 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-16 05:24:30.927536695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 05:24:30.937471  613493 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 05:24:30.937733  613493 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 05:24:30.940711  613493 out.go:177] * Using Docker driver with root privileges
	I1216 05:24:30.943590  613493 cni.go:84] Creating CNI manager for ""
	I1216 05:24:30.943653  613493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:24:30.943660  613493 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 05:24:30.943744  613493 start.go:340] cluster config:
	{Name:missing-upgrade-508979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-508979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
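
This flags-derived cluster config is later persisted as the profile's config.json (see the "Saving config to ..." line further down). A hypothetical, heavily trimmed sketch of serializing such a structure; minikube's real ClusterConfig lives in its pkg/minikube/config package and carries every field shown above:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterConfig is a deliberately tiny, illustrative slice of the fields
// visible in the cluster config dump above.
type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
		NetworkPlugin     string
		ServiceCIDR       string
	}
}

func main() {
	var c clusterConfig
	c.Name = "missing-upgrade-508979"
	c.Driver = "docker"
	c.Memory = 3072
	c.CPUs = 2
	c.KubernetesConfig.KubernetesVersion = "v1.32.0"
	c.KubernetesConfig.ContainerRuntime = "crio"
	c.KubernetesConfig.NetworkPlugin = "cni"
	c.KubernetesConfig.ServiceCIDR = "10.96.0.0/12"
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b)) // in minikube this lands in .minikube/profiles/<name>/config.json
}
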
	I1216 05:24:30.946718  613493 out.go:177] * Starting "missing-upgrade-508979" primary control-plane node in "missing-upgrade-508979" cluster
	I1216 05:24:30.949627  613493 cache.go:121] Beginning downloading kic base image for docker with crio
	I1216 05:24:30.952922  613493 out.go:177] * Pulling base image v0.0.46 ...
	I1216 05:24:30.955848  613493 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 05:24:30.955936  613493 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1216 05:24:30.972301  613493 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I1216 05:24:30.972460  613493 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I1216 05:24:30.972503  613493 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I1216 05:24:31.007887  613493 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1216 05:24:31.007903  613493 cache.go:56] Caching tarball of preloaded images
	I1216 05:24:31.008064  613493 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 05:24:31.011493  613493 out.go:177] * Downloading Kubernetes v1.32.0 preload ...
	I1216 05:24:30.344645  612363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:24:30.344663  612363 machine.go:97] duration metric: took 6.486716767s to provisionDockerMachine
	I1216 05:24:30.344696  612363 start.go:293] postStartSetup for "pause-879168" (driver="docker")
	I1216 05:24:30.344709  612363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:24:30.344781  612363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:24:30.344902  612363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-879168
	I1216 05:24:30.375995  612363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/pause-879168/id_rsa Username:docker}
	I1216 05:24:30.492361  612363 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:24:30.496232  612363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:24:30.496256  612363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:24:30.496267  612363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 05:24:30.496322  612363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 05:24:30.496397  612363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 05:24:30.496501  612363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:24:30.510973  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 05:24:30.546897  612363 start.go:296] duration metric: took 202.154559ms for postStartSetup
	I1216 05:24:30.547018  612363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:24:30.547117  612363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-879168
	I1216 05:24:30.580239  612363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/pause-879168/id_rsa Username:docker}
	I1216 05:24:30.683264  612363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:24:30.688740  612363 fix.go:56] duration metric: took 6.851204041s for fixHost
	I1216 05:24:30.688764  612363 start.go:83] releasing machines lock for "pause-879168", held for 6.851255348s
	I1216 05:24:30.688832  612363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-879168
	I1216 05:24:30.716701  612363 ssh_runner.go:195] Run: cat /version.json
	I1216 05:24:30.716773  612363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-879168
	I1216 05:24:30.717188  612363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:24:30.717253  612363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-879168
	I1216 05:24:30.762322  612363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/pause-879168/id_rsa Username:docker}
	I1216 05:24:30.776302  612363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/pause-879168/id_rsa Username:docker}
	I1216 05:24:30.968533  612363 ssh_runner.go:195] Run: systemctl --version
	I1216 05:24:30.975102  612363 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:24:31.021641  612363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:24:31.026306  612363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:24:31.026380  612363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:24:31.034489  612363 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
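
The find/mv pipeline above renames any bridge or podman CNI config to *.mk_disabled so it cannot conflict with the kindnet CNI selected earlier; here nothing matched, hence "nothing to disable". The same sweep, sketched as a standalone Go helper (the function name and logging are illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs mimics the find/mv pipeline above: rename any bridge or
// podman CNI config so the runtime ignores it, skipping files already disabled.
func disableBridgeCNIs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNIs("/etc/cni/net.d"); err != nil {
		fmt.Println(err)
	}
}
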
	I1216 05:24:31.034513  612363 start.go:496] detecting cgroup driver to use...
	I1216 05:24:31.034545  612363 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:24:31.034593  612363 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:24:31.049839  612363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:24:31.063612  612363 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:24:31.063690  612363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:24:31.080437  612363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:24:31.093438  612363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:24:31.232917  612363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:24:31.363660  612363 docker.go:234] disabling docker service ...
	I1216 05:24:31.363726  612363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:24:31.379621  612363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:24:31.393750  612363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:24:31.547729  612363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:24:31.683442  612363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:24:31.701434  612363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:24:31.718507  612363 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:24:31.718568  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.738071  612363 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 05:24:31.738160  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.747410  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.756345  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.766135  612363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:24:31.777230  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.786884  612363 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.795997  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.807162  612363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:24:31.818962  612363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:24:31.829444  612363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:24:31.970891  612363 ssh_runner.go:195] Run: sudo systemctl restart crio
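
The sed runs above rewrite pause_image, cgroup_manager, and related keys in cri-o's drop-in config, and the daemon-reload/restart pair then applies them. A sketch of the same line-oriented rewrite, assuming a hypothetical setConfKey helper rather than minikube's SSH-based path:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey mirrors the sed invocations above: rewrite any existing
// `key = ...` line in a cri-o drop-in to the quoted value. Hypothetical
// helper, not minikube's actual code path.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	_ = setConfKey(conf, "cgroup_manager", "cgroupfs")
	// followed by `systemctl daemon-reload` and `systemctl restart crio`,
	// as the log above shows
}
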
	I1216 05:24:32.384359  612363 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:24:32.384436  612363 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:24:32.388832  612363 start.go:564] Will wait 60s for crictl version
	I1216 05:24:32.388900  612363 ssh_runner.go:195] Run: which crictl
	I1216 05:24:32.393748  612363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:24:32.429673  612363 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:24:32.429773  612363 ssh_runner.go:195] Run: crio --version
	I1216 05:24:32.462262  612363 ssh_runner.go:195] Run: crio --version
	I1216 05:24:32.501095  612363 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:24:32.504034  612363 cli_runner.go:164] Run: docker network inspect pause-879168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
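
The long --format template above makes `docker network inspect` emit one compact JSON document holding just the name, driver, subnet, gateway, MTU, and container IPs. A simpler variant of the same idea, letting Docker's built-in json template function serialize the IPAM block (the struct fields are an assumption based on standard inspect output):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// ipamConfig matches the Subnet/Gateway fields the template above extracts.
type ipamConfig struct {
	Subnet  string `json:"Subnet"`
	Gateway string `json:"Gateway"`
}

func main() {
	out, err := exec.Command("docker", "network", "inspect", "pause-879168",
		"--format", "{{json .IPAM.Config}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var cfgs []ipamConfig
	if err := json.Unmarshal(out, &cfgs); err != nil {
		fmt.Println(err)
		return
	}
	for _, c := range cfgs {
		fmt.Printf("subnet=%s gateway=%s\n", c.Subnet, c.Gateway)
	}
}
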
	I1216 05:24:32.560528  612363 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 05:24:32.565609  612363 kubeadm.go:884] updating cluster {Name:pause-879168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-879168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:24:32.565750  612363 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:24:32.565801  612363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:24:32.607568  612363 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:24:32.607589  612363 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:24:32.607645  612363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:24:32.642279  612363 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:24:32.642300  612363 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:24:32.642307  612363 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1216 05:24:32.642421  612363 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-879168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-879168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
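
The generated unit above uses the standard systemd override idiom: the bare `ExecStart=` clears the distro-packaged command so the second `ExecStart=` can replace it outright. A sketch of writing such a drop-in locally (minikube instead copies the content over SSH, as the 10-kubeadm.conf transfer below shows; the trimmed flag set here is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dropInDir := "/etc/systemd/system/kubelet.service.d"
	// The empty ExecStart= line resets the vendor unit's command before
	// the replacement ExecStart takes effect.
	unit := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --hostname-override=pause-879168 --node-ip=192.168.76.2
`
	if err := os.MkdirAll(dropInDir, 0o755); err != nil {
		fmt.Println(err)
		return
	}
	path := filepath.Join(dropInDir, "10-kubeadm.conf")
	if err := os.WriteFile(path, []byte(unit), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote", path, "- apply with `systemctl daemon-reload && systemctl restart kubelet`")
}
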
	I1216 05:24:32.642499  612363 ssh_runner.go:195] Run: crio config
	I1216 05:24:32.711930  612363 cni.go:84] Creating CNI manager for ""
	I1216 05:24:32.712007  612363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:24:32.712042  612363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:24:32.712094  612363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-879168 NodeName:pause-879168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:24:32.712283  612363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-879168"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
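
The kubeadm config above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A sketch of walking those documents with a streaming decoder, assuming the gopkg.in/yaml.v3 dependency and a deliberately tiny doc struct (kubeadm itself parses these through its own scheme):

package main

import (
	"bytes"
	"fmt"
	"os"

	yaml "gopkg.in/yaml.v3"
)

// doc captures just enough of each YAML document to identify its kind and
// pull out two of the kubelet fields shown above; everything else is ignored.
type doc struct {
	APIVersion   string `yaml:"apiVersion"`
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
	FailSwapOn   bool   `yaml:"failSwapOn"`
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var d doc
		if err := dec.Decode(&d); err != nil {
			break // io.EOF once the last document is consumed
		}
		fmt.Println("found document:", d.Kind)
		if d.Kind == "KubeletConfiguration" {
			fmt.Printf("  cgroupDriver=%s failSwapOn=%v\n", d.CgroupDriver, d.FailSwapOn)
		}
	}
}
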
	
	I1216 05:24:32.712401  612363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:24:32.721800  612363 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:24:32.721913  612363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:24:32.729887  612363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1216 05:24:32.743389  612363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:24:32.756933  612363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1216 05:24:32.771316  612363 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:24:32.776113  612363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:24:32.949145  612363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:24:32.965352  612363 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168 for IP: 192.168.76.2
	I1216 05:24:32.965371  612363 certs.go:195] generating shared ca certs ...
	I1216 05:24:32.965387  612363 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:24:32.965554  612363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 05:24:32.965623  612363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 05:24:32.965641  612363 certs.go:257] generating profile certs ...
	I1216 05:24:32.965733  612363 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/client.key
	I1216 05:24:32.965799  612363 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/apiserver.key.5384c97b
	I1216 05:24:32.965841  612363 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/proxy-client.key
	I1216 05:24:32.965957  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 05:24:32.965986  612363 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 05:24:32.965994  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:24:32.966020  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 05:24:32.966042  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:24:32.966065  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 05:24:32.966115  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 05:24:32.966756  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:24:32.988925  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:24:33.021839  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:24:33.048954  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 05:24:33.070053  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:24:33.090617  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:24:33.110737  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:24:33.131017  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:24:33.153449  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 05:24:33.174909  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:24:33.194992  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 05:24:33.218710  612363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:24:33.233502  612363 ssh_runner.go:195] Run: openssl version
	I1216 05:24:33.240431  612363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 05:24:33.248987  612363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 05:24:33.257703  612363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 05:24:33.262329  612363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 05:24:33.262391  612363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 05:24:33.304934  612363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:24:33.313351  612363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 05:24:33.321429  612363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 05:24:33.333921  612363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 05:24:33.338410  612363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 05:24:33.338549  612363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 05:24:33.384590  612363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:24:33.392912  612363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:24:33.401223  612363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:24:33.410037  612363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:24:33.414413  612363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:24:33.414555  612363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:24:33.456099  612363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
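
Each `openssl x509 -hash -noout` call above prints the subject-name hash that OpenSSL uses to name trust-store entries, which is where symlinks such as /etc/ssl/certs/b5213941.0 come from. The hash-then-symlink step, sketched as a hypothetical local helper (minikube runs the equivalent `ln -fs` over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert reproduces the hash-then-symlink step from the log above:
// ask openssl for the subject hash, then create /etc/ssl/certs/<hash>.0.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // `ln -fs` semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
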
	I1216 05:24:33.464498  612363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:24:33.468927  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:24:33.510874  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:24:33.552660  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:24:33.594051  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:24:33.636225  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:24:33.679116  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
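
`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 86400 seconds, which is how the six checks above decide whether control-plane certs need regeneration. The same test in pure Go via crypto/x509 (the paths and helper name are illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend <seconds>`: report whether
// the certificate's NotAfter falls inside the given window from now.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
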
	I1216 05:24:33.758162  612363 kubeadm.go:401] StartCluster: {Name:pause-879168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-879168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:24:33.758336  612363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:24:33.758437  612363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:24:33.834111  612363 cri.go:89] found id: "d1c7aee14d1048b18fcd07209b943009d9a85a69d0c5cee668acd989fb9ed309"
	I1216 05:24:33.834196  612363 cri.go:89] found id: "6d8570293bc3b615ca8558c9c245c34413db3307ea4c9dc1156de6be82366c43"
	I1216 05:24:33.834213  612363 cri.go:89] found id: "3e66591d8ee86b0879aeecb1a61f768173550e0389177364fa184b2694aff00f"
	I1216 05:24:33.834229  612363 cri.go:89] found id: "c0d025670a91a4d8a61391711a080e93e875e808cbaa29712ba6feb5636a12cc"
	I1216 05:24:33.834263  612363 cri.go:89] found id: "d8bd8959d629eec53cb3c82761a3da996cdd881c9d140609854bbf22b3702a51"
	I1216 05:24:33.834285  612363 cri.go:89] found id: "2e109aacd16433537ebfcc0e8f0693e4255203df82bdfdcb738267fffab893f0"
	I1216 05:24:33.834303  612363 cri.go:89] found id: "80e7a81bd9e8176865d8a2b2254d322cff4d032e109644dc1ff242823b19f2c2"
	I1216 05:24:33.834321  612363 cri.go:89] found id: ""
	I1216 05:24:33.834405  612363 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:24:33.855034  612363 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:24:33Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:24:33.855158  612363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:24:33.887496  612363 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:24:33.887569  612363 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:24:33.887662  612363 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:24:33.897298  612363 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:24:33.897921  612363 kubeconfig.go:125] found "pause-879168" server: "https://192.168.76.2:8443"
	I1216 05:24:33.899061  612363 kapi.go:59] client config for pause-879168: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:24:33.899673  612363 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 05:24:33.899715  612363 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 05:24:33.899796  612363 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 05:24:33.899825  612363 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 05:24:33.899843  612363 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 05:24:33.900179  612363 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:24:33.909837  612363 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 05:24:33.909914  612363 kubeadm.go:602] duration metric: took 22.326346ms to restartPrimaryControlPlane
	I1216 05:24:33.909939  612363 kubeadm.go:403] duration metric: took 151.788009ms to StartCluster
	I1216 05:24:33.909986  612363 settings.go:142] acquiring lock: {Name:mk7579526d30444d4a36dd9eeacfd82389e55168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:24:33.910060  612363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 05:24:33.910782  612363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:24:33.911277  612363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:24:33.911669  612363 config.go:182] Loaded profile config "pause-879168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:24:33.911754  612363 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:24:33.923093  612363 out.go:179] * Verifying Kubernetes components...
	I1216 05:24:33.923253  612363 out.go:179] * Enabled addons: 
	I1216 05:24:31.014458  613493 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 ...
	I1216 05:24:31.097125  613493 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:d3dc3b83b826438926b7b91af837ed7b -> /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
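
The ?checksum=md5:... suffix on the preload URL tells minikube's download layer to verify the tarball against that digest after fetching; the "saving checksum" and "verifying checksum" lines below are that step completing. A self-contained sketch of a download-then-verify using the same md5 from the URL (a plain net/http stand-in, not minikube's actual downloader):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchWithMD5 downloads url to dst and compares the stream's md5 against
// want (hex-encoded), mirroring the checksum-in-query handling above.
func fetchWithMD5(url, dst, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	// Tee the body into both the file and the hash in one pass.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := fetchWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"d3dc3b83b826438926b7b91af837ed7b",
	)
	fmt.Println(err)
}
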
	I1216 05:24:33.932039  612363 addons.go:530] duration metric: took 20.27861ms for enable addons: enabled=[]
	I1216 05:24:33.932184  612363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:24:34.727944  612363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:24:34.750514  612363 node_ready.go:35] waiting up to 6m0s for node "pause-879168" to be "Ready" ...
	I1216 05:24:37.490761  613493 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I1216 05:24:37.490775  613493 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
	I1216 05:24:38.243273  613493 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 ...
	I1216 05:24:38.243448  613493 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 ...
	I1216 05:24:40.173795  613493 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 05:24:40.173912  613493 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/missing-upgrade-508979/config.json ...
	I1216 05:24:40.173937  613493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/missing-upgrade-508979/config.json: {Name:mkd1727fdfb27a771a55ed04579d60062c7c0da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:24:42.063702  612363 node_ready.go:49] node "pause-879168" is "Ready"
	I1216 05:24:42.063732  612363 node_ready.go:38] duration metric: took 7.313187368s for node "pause-879168" to be "Ready" ...
	I1216 05:24:42.063747  612363 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:24:42.063820  612363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:24:42.105470  612363 api_server.go:72] duration metric: took 8.19412301s to wait for apiserver process to appear ...
	I1216 05:24:42.105510  612363 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:24:42.105530  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:42.231169  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:24:42.231261  612363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:24:42.605647  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:42.644449  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:24:42.644617  612363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:24:43.105658  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:43.137880  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:24:43.137977  612363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:24:43.606167  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:43.635990  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:24:43.636087  612363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:24:44.105641  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:44.122200  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:24:44.122279  612363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:24:44.605648  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:44.621632  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1216 05:24:44.623304  612363 api_server.go:141] control plane version: v1.34.2
	I1216 05:24:44.623383  612363 api_server.go:131] duration metric: took 2.517855003s to wait for apiserver health ...
	I1216 05:24:44.623407  612363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:24:44.633735  612363 system_pods.go:59] 7 kube-system pods found
	I1216 05:24:44.633846  612363 system_pods.go:61] "coredns-66bc5c9577-bz4lq" [043ed348-b26d-4228-942e-88494a373c9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:24:44.633874  612363 system_pods.go:61] "etcd-pause-879168" [b1c55721-1051-4e78-a67e-9503b65225b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:24:44.633894  612363 system_pods.go:61] "kindnet-dc7d6" [c8b5f04f-9213-46c2-bd06-e330b1668b3d] Running
	I1216 05:24:44.633931  612363 system_pods.go:61] "kube-apiserver-pause-879168" [ccbce79a-a32b-44c5-9faf-438f9f887e93] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:24:44.633959  612363 system_pods.go:61] "kube-controller-manager-pause-879168" [ff071a5d-668d-440c-9956-b3a41041cfdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:24:44.633986  612363 system_pods.go:61] "kube-proxy-f2xxq" [ee80a4c8-c171-4039-a5c1-ae20319deaf1] Running
	I1216 05:24:44.634018  612363 system_pods.go:61] "kube-scheduler-pause-879168" [5709dd8a-ff9a-4e27-9f11-104c657d37b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:24:44.634043  612363 system_pods.go:74] duration metric: took 10.613259ms to wait for pod list to return data ...
	I1216 05:24:44.634064  612363 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:24:44.642332  612363 default_sa.go:45] found service account: "default"
	I1216 05:24:44.642415  612363 default_sa.go:55] duration metric: took 8.331446ms for default service account to be created ...
	I1216 05:24:44.642441  612363 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:24:44.647414  612363 system_pods.go:86] 7 kube-system pods found
	I1216 05:24:44.647500  612363 system_pods.go:89] "coredns-66bc5c9577-bz4lq" [043ed348-b26d-4228-942e-88494a373c9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:24:44.647527  612363 system_pods.go:89] "etcd-pause-879168" [b1c55721-1051-4e78-a67e-9503b65225b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:24:44.647562  612363 system_pods.go:89] "kindnet-dc7d6" [c8b5f04f-9213-46c2-bd06-e330b1668b3d] Running
	I1216 05:24:44.647586  612363 system_pods.go:89] "kube-apiserver-pause-879168" [ccbce79a-a32b-44c5-9faf-438f9f887e93] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:24:44.647607  612363 system_pods.go:89] "kube-controller-manager-pause-879168" [ff071a5d-668d-440c-9956-b3a41041cfdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:24:44.647626  612363 system_pods.go:89] "kube-proxy-f2xxq" [ee80a4c8-c171-4039-a5c1-ae20319deaf1] Running
	I1216 05:24:44.647666  612363 system_pods.go:89] "kube-scheduler-pause-879168" [5709dd8a-ff9a-4e27-9f11-104c657d37b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:24:44.647688  612363 system_pods.go:126] duration metric: took 5.226937ms to wait for k8s-apps to be running ...
	I1216 05:24:44.647724  612363 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:24:44.647811  612363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:24:44.690972  612363 system_svc.go:56] duration metric: took 43.252593ms WaitForService to wait for kubelet
	I1216 05:24:44.691054  612363 kubeadm.go:587] duration metric: took 10.779715079s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:24:44.691088  612363 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:24:44.704871  612363 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 05:24:44.704951  612363 node_conditions.go:123] node cpu capacity is 2
	I1216 05:24:44.704979  612363 node_conditions.go:105] duration metric: took 13.868104ms to run NodePressure ...
	I1216 05:24:44.705018  612363 start.go:242] waiting for startup goroutines ...
	I1216 05:24:44.705041  612363 start.go:247] waiting for cluster config update ...
	I1216 05:24:44.705090  612363 start.go:256] writing updated cluster config ...
	I1216 05:24:44.705442  612363 ssh_runner.go:195] Run: rm -f paused
	I1216 05:24:44.717619  612363 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:24:44.718292  612363 kapi.go:59] client config for pause-879168: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:24:44.744423  612363 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bz4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:46.257170  612363 pod_ready.go:94] pod "coredns-66bc5c9577-bz4lq" is "Ready"
	I1216 05:24:46.257245  612363 pod_ready.go:86] duration metric: took 1.512746999s for pod "coredns-66bc5c9577-bz4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:46.260105  612363 pod_ready.go:83] waiting for pod "etcd-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:24:48.269196  612363 pod_ready.go:104] pod "etcd-pause-879168" is not "Ready", error: <nil>
	W1216 05:24:50.765718  612363 pod_ready.go:104] pod "etcd-pause-879168" is not "Ready", error: <nil>
	I1216 05:24:52.266051  612363 pod_ready.go:94] pod "etcd-pause-879168" is "Ready"
	I1216 05:24:52.266127  612363 pod_ready.go:86] duration metric: took 6.005952213s for pod "etcd-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:52.268764  612363 pod_ready.go:83] waiting for pod "kube-apiserver-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:52.273646  612363 pod_ready.go:94] pod "kube-apiserver-pause-879168" is "Ready"
	I1216 05:24:52.273676  612363 pod_ready.go:86] duration metric: took 4.88345ms for pod "kube-apiserver-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:52.275806  612363 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:24:54.281150  612363 pod_ready.go:104] pod "kube-controller-manager-pause-879168" is not "Ready", error: <nil>
	I1216 05:24:54.782402  612363 pod_ready.go:94] pod "kube-controller-manager-pause-879168" is "Ready"
	I1216 05:24:54.782428  612363 pod_ready.go:86] duration metric: took 2.506595314s for pod "kube-controller-manager-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:54.786325  612363 pod_ready.go:83] waiting for pod "kube-proxy-f2xxq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:54.792579  612363 pod_ready.go:94] pod "kube-proxy-f2xxq" is "Ready"
	I1216 05:24:54.792655  612363 pod_ready.go:86] duration metric: took 6.305529ms for pod "kube-proxy-f2xxq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:54.795668  612363 pod_ready.go:83] waiting for pod "kube-scheduler-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:55.064929  612363 pod_ready.go:94] pod "kube-scheduler-pause-879168" is "Ready"
	I1216 05:24:55.064962  612363 pod_ready.go:86] duration metric: took 269.271553ms for pod "kube-scheduler-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:55.064975  612363 pod_ready.go:40] duration metric: took 10.347271328s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:24:55.146719  612363 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1216 05:24:55.151867  612363 out.go:179] * Done! kubectl is now configured to use "pause-879168" cluster and "default" namespace by default
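
The probe loop above polls /healthz until every poststarthook reports ok; here it converged in ~2.5s, going from five failing hooks down to zero. A minimal way to re-run the same check by hand, assuming the pause-879168 cluster from this log is still up at 192.168.76.2:8443:

    # verbose output lists each check individually, matching the log above
    kubectl --context pause-879168 get --raw='/healthz?verbose'
    # or hit the endpoint directly, skipping TLS verification
    curl -k 'https://192.168.76.2:8443/healthz?verbose'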
	
	
	==> CRI-O <==
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.165864498Z" level=info msg="Started container" PID=2387 containerID=0ecb3cb231904dc7f5c6ab5a546ad2edc08955e1ecbc8c04bffec5e146eb5865 description=kube-system/etcd-pause-879168/etcd id=26f2e24a-1438-44e5-9565-75dea39aa55b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad6a660e6a103a08a7eb2deffe94059769c068fcc5def3929c833e65380bb591
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.178752465Z" level=info msg="Started container" PID=2400 containerID=292fc57a6b2f05e0366768d4818f2f82aa3678cab45473b441a002b1c2edf832 description=kube-system/kindnet-dc7d6/kindnet-cni id=c494f42d-5f25-4e83-a319-93beaa5f3c43 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a1dc035379f6c8641fdd60b31fc19c443a6f0c53bf0d1c7538c69fbf0b4a779
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.189832625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.190588277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.242641589Z" level=info msg="Created container 6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092: kube-system/coredns-66bc5c9577-bz4lq/coredns" id=7e5c5f02-d62f-45ea-9f1e-38309d6080f4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.243411427Z" level=info msg="Starting container: 6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092" id=b1a62bee-c222-4272-bd59-73ffcdff5930 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.245381221Z" level=info msg="Started container" PID=2443 containerID=6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092 description=kube-system/coredns-66bc5c9577-bz4lq/coredns id=b1a62bee-c222-4272-bd59-73ffcdff5930 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4dda7b121b7cd3d16d36abb3bcb791c3e037c34e624ae6bfc469a92ba0e69250
	Dec 16 05:24:35 pause-879168 crio[2115]: time="2025-12-16T05:24:35.153404671Z" level=info msg="Created container d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a: kube-system/kube-proxy-f2xxq/kube-proxy" id=aa54f715-848d-4a4e-a127-44f7471f81a0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:24:35 pause-879168 crio[2115]: time="2025-12-16T05:24:35.154617Z" level=info msg="Starting container: d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a" id=6546d864-1c20-4041-b2ae-df5ffc53b084 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:24:35 pause-879168 crio[2115]: time="2025-12-16T05:24:35.157751926Z" level=info msg="Started container" PID=2446 containerID=d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a description=kube-system/kube-proxy-f2xxq/kube-proxy id=6546d864-1c20-4041-b2ae-df5ffc53b084 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6772a0aa353c2009df4b08b88131ee8d4cf1eb71f0bf77d431b0ce5860fa6021
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.748956663Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.75545284Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.755622507Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.755704059Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.769329658Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.769518698Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.769619983Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.779440708Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.779726119Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.779809295Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.789309057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.789566619Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.789733939Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.797924871Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.798096458Z" level=info msg="Updated default CNI network name to kindnet"
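
Each rewrite of the kindnet conflist shows up as a CREATE/WRITE/RENAME burst because kindnet writes a .temp file and renames it into place, and CRI-O's CNI monitor re-reads the directory on every event. To inspect the config it settled on (paths taken from the log above; node shell access assumed):

    minikube -p pause-879168 ssh -- ls /etc/cni/net.d/
    minikube -p pause-879168 ssh -- cat /etc/cni/net.d/10-kindnet.conflist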
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d02fe2c02c60f       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   24 seconds ago       Running             kube-proxy                1                   6772a0aa353c2       kube-proxy-f2xxq                       kube-system
	6dcef43081a8a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   4dda7b121b7cd       coredns-66bc5c9577-bz4lq               kube-system
	292fc57a6b2f0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   0a1dc035379f6       kindnet-dc7d6                          kube-system
	0ecb3cb231904       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   24 seconds ago       Running             etcd                      1                   ad6a660e6a103       etcd-pause-879168                      kube-system
	e815c7290489a       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   24 seconds ago       Running             kube-scheduler            1                   0beacfc985bd4       kube-scheduler-pause-879168            kube-system
	6daae05879a8b       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   24 seconds ago       Running             kube-controller-manager   1                   5d830a872fa5c       kube-controller-manager-pause-879168   kube-system
	6b8e81e70d403       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   24 seconds ago       Running             kube-apiserver            1                   933759dd9eb06       kube-apiserver-pause-879168            kube-system
	d1c7aee14d104       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   4dda7b121b7cd       coredns-66bc5c9577-bz4lq               kube-system
	6d8570293bc3b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   0a1dc035379f6       kindnet-dc7d6                          kube-system
	3e66591d8ee86       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   6772a0aa353c2       kube-proxy-f2xxq                       kube-system
	c0d025670a91a       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   933759dd9eb06       kube-apiserver-pause-879168            kube-system
	d8bd8959d629e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   5d830a872fa5c       kube-controller-manager-pause-879168   kube-system
	2e109aacd1643       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   0beacfc985bd4       kube-scheduler-pause-879168            kube-system
	80e7a81bd9e81       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   ad6a660e6a103       etcd-pause-879168                      kube-system
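
The table shows every control-plane container restarted exactly once (ATTEMPT 1, Running) with its original ATTEMPT 0 instance left Exited, consistent with a pause/unpause cycle rather than a crash loop. A sketch of how to regenerate it on the node (the log collector presumably shells out to crictl; treat the exact flags as an assumption):

    # all containers, running and exited, as listed above
    minikube -p pause-879168 ssh -- sudo crictl ps -a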
	
	
	==> coredns [6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56272 - 48153 "HINFO IN 6667063183447398688.7611637045416865144. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019291457s
	
	
	==> coredns [d1c7aee14d1048b18fcd07209b943009d9a85a69d0c5cee668acd989fb9ed309] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59557 - 26781 "HINFO IN 2066334530245165092.2205259519758289492. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025914832s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
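
Both CoreDNS instances load the identical configuration (same SHA512): the ATTEMPT 0 copy (d1c7aee...) received SIGTERM during the restart and entered its 5s lameduck window, while its replacement (6dcef430...) waited for the Kubernetes API before serving. To tail the live copy, assuming the default k8s-app=kube-dns label minikube deploys with:

    kubectl --context pause-879168 -n kube-system logs -l k8s-app=kube-dns -f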
	
	
	==> describe nodes <==
	Name:               pause-879168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-879168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=pause-879168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:23:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-879168
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:24:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:24:21 +0000   Tue, 16 Dec 2025 05:23:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:24:21 +0000   Tue, 16 Dec 2025 05:23:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:24:21 +0000   Tue, 16 Dec 2025 05:23:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:24:21 +0000   Tue, 16 Dec 2025 05:24:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-879168
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b01d95696b577408f2b2782693c8bc0
	  System UUID:                9ffec5f8-a07d-409c-8e82-bddcfcb65e99
	  Boot ID:                    e72ece1f-d416-4c20-8564-468e8b5f7888
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-bz4lq                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-879168                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-dc7d6                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-879168             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-879168    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-f2xxq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-879168             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Warning  CgroupV1                 96s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  96s (x8 over 96s)  kubelet          Node pause-879168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s (x8 over 96s)  kubelet          Node pause-879168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     96s (x8 over 96s)  kubelet          Node pause-879168 status is now: NodeHasSufficientPID
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-879168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-879168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-879168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-879168 event: Registered Node pause-879168 in Controller
	  Normal   NodeReady                37s                kubelet          Node pause-879168 status is now: NodeReady
	  Normal   RegisteredNode           10s                node-controller  Node pause-879168 event: Registered Node pause-879168 in Controller
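
The doubled Starting/RegisteredNode events (78s/15s and 80s/10s) correspond to the first boot and the post-unpause restart of kube-proxy and node registration. The whole section can be reproduced against the same cluster with:

    kubectl --context pause-879168 describe node pause-879168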
	
	
	==> dmesg <==
	[Dec16 04:58] overlayfs: idmapped layers are currently not supported
	[  +2.957541] overlayfs: idmapped layers are currently not supported
	[Dec16 04:59] overlayfs: idmapped layers are currently not supported
	[Dec16 05:01] overlayfs: idmapped layers are currently not supported
	[Dec16 05:02] overlayfs: idmapped layers are currently not supported
	[  +4.043407] overlayfs: idmapped layers are currently not supported
	[Dec16 05:03] overlayfs: idmapped layers are currently not supported
	[Dec16 05:04] overlayfs: idmapped layers are currently not supported
	[Dec16 05:05] overlayfs: idmapped layers are currently not supported
	[Dec16 05:10] overlayfs: idmapped layers are currently not supported
	[Dec16 05:11] overlayfs: idmapped layers are currently not supported
	[Dec16 05:12] overlayfs: idmapped layers are currently not supported
	[Dec16 05:13] overlayfs: idmapped layers are currently not supported
	[Dec16 05:14] overlayfs: idmapped layers are currently not supported
	[Dec16 05:16] overlayfs: idmapped layers are currently not supported
	[ +25.166334] overlayfs: idmapped layers are currently not supported
	[  +0.467202] overlayfs: idmapped layers are currently not supported
	[Dec16 05:17] overlayfs: idmapped layers are currently not supported
	[ +18.764288] overlayfs: idmapped layers are currently not supported
	[Dec16 05:18] overlayfs: idmapped layers are currently not supported
	[ +26.071219] overlayfs: idmapped layers are currently not supported
	[Dec16 05:20] overlayfs: idmapped layers are currently not supported
	[Dec16 05:21] overlayfs: idmapped layers are currently not supported
	[Dec16 05:23] overlayfs: idmapped layers are currently not supported
	[  +3.507219] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0ecb3cb231904dc7f5c6ab5a546ad2edc08955e1ecbc8c04bffec5e146eb5865] <==
	{"level":"warn","ts":"2025-12-16T05:24:37.801394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.828467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.844599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.868225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.883592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.937699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.978050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.016331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.058125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.112505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.147455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.246043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.294731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.388546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.584622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.706112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.745194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.838385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.966572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.058950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.157252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.226601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.294284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.374291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.576753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41578","server-name":"","error":"EOF"}
	
	
	==> etcd [80e7a81bd9e8176865d8a2b2254d322cff4d032e109644dc1ff242823b19f2c2] <==
	{"level":"warn","ts":"2025-12-16T05:23:28.468565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.496134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.534682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.561869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.599183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.613293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.756038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T05:24:25.079148Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-16T05:24:25.079208Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-879168","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-16T05:24:25.079307Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-16T05:24:25.357325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-16T05:24:25.357526Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T05:24:25.357594Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-12-16T05:24:25.357607Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T05:24:25.357672Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-16T05:24:25.357700Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-12-16T05:24:25.357703Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-16T05:24:25.357675Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T05:24:25.357728Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-16T05:24:25.357736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T05:24:25.357747Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-16T05:24:25.360965Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-16T05:24:25.361050Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T05:24:25.361117Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-16T05:24:25.361126Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-879168","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 05:24:59 up  4:07,  0 user,  load average: 3.46, 2.02, 1.72
	Linux pause-879168 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [292fc57a6b2f05e0366768d4818f2f82aa3678cab45473b441a002b1c2edf832] <==
	I1216 05:24:34.363026       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:24:34.363396       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 05:24:34.363569       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:24:34.363612       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:24:34.363649       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:24:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:24:34.747053       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:24:34.747755       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:24:34.752416       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:24:34.753613       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:24:42.353003       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:24:42.353151       1 metrics.go:72] Registering metrics
	I1216 05:24:42.353263       1 controller.go:711] "Syncing nftables rules"
	I1216 05:24:44.745205       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:24:44.747838       1 main.go:301] handling current node
	I1216 05:24:54.744963       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:24:54.745007       1 main.go:301] handling current node
	
	
	==> kindnet [6d8570293bc3b615ca8558c9c245c34413db3307ea4c9dc1156de6be82366c43] <==
	I1216 05:23:40.431896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:23:40.517225       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 05:23:40.517502       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:23:40.517548       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:23:40.517581       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:23:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:23:40.718506       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:23:40.718581       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:23:40.718613       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:23:40.719769       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1216 05:24:10.719279       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1216 05:24:10.719393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1216 05:24:10.719484       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1216 05:24:10.720778       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1216 05:24:12.220821       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:24:12.220919       1 metrics.go:72] Registering metrics
	I1216 05:24:12.221025       1 controller.go:711] "Syncing nftables rules"
	I1216 05:24:20.725148       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:24:20.725203       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6b8e81e70d40373c6eb323cdec44bd51871ee3925462b7f451a590587032fedb] <==
	I1216 05:24:41.963516       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1216 05:24:42.105394       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 05:24:42.105614       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 05:24:42.105910       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 05:24:42.117741       1 aggregator.go:171] initial CRD sync complete...
	I1216 05:24:42.117842       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 05:24:42.117875       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:24:42.117905       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:24:42.118448       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 05:24:42.150841       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 05:24:42.151953       1 policy_source.go:240] refreshing policies
	I1216 05:24:42.159379       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 05:24:42.159539       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 05:24:42.172462       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 05:24:42.184885       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 05:24:42.188039       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1216 05:24:42.188232       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:24:42.198792       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 05:24:42.193256       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:24:42.246641       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 05:24:42.288671       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 05:24:42.290480       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1216 05:24:42.328652       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 05:24:42.663962       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:24:46.743503       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-apiserver [c0d025670a91a4d8a61391711a080e93e875e808cbaa29712ba6feb5636a12cc] <==
	W1216 05:24:25.110846       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.110945       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.111024       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.111097       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.111180       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.111265       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112069       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112265       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112401       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112481       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112523       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112578       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112640       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112682       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112784       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112788       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112846       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112892       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112885       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112940       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112962       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112984       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.113031       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.113034       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.113114       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6daae05879a8bfbcb59c78c8282efa943812c98cbe80bf9f862169baef894f22] <==
	I1216 05:24:48.232896       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 05:24:48.233550       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1216 05:24:48.233646       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1216 05:24:48.234489       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1216 05:24:48.234612       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1216 05:24:48.234656       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 05:24:48.234735       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 05:24:48.240662       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 05:24:48.243172       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 05:24:48.246410       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 05:24:48.269491       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:24:48.273865       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:24:48.273977       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 05:24:48.274066       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 05:24:48.274141       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 05:24:48.274175       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 05:24:48.274203       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 05:24:48.281227       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 05:24:48.281637       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 05:24:48.281779       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 05:24:48.281874       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:24:48.281909       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 05:24:48.281943       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 05:24:48.288407       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 05:24:48.298439       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-controller-manager [d8bd8959d629eec53cb3c82761a3da996cdd881c9d140609854bbf22b3702a51] <==
	I1216 05:23:38.311934       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 05:23:38.316431       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:23:38.311952       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 05:23:38.311970       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 05:23:38.311991       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 05:23:38.318061       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 05:23:38.318181       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 05:23:38.318272       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-879168"
	I1216 05:23:38.318355       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1216 05:23:38.318402       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1216 05:23:38.323757       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 05:23:38.331090       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 05:23:38.337431       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 05:23:38.345502       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 05:23:38.349296       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-879168" podCIDRs=["10.244.0.0/24"]
	I1216 05:23:38.358534       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 05:23:38.359457       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 05:23:38.359568       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 05:23:38.360311       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 05:23:38.361918       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:23:38.365146       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 05:23:38.365272       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 05:23:38.365809       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 05:23:38.387169       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:24:23.327766       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3e66591d8ee86b0879aeecb1a61f768173550e0389177364fa184b2694aff00f] <==
	I1216 05:23:40.360083       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:23:40.473331       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:23:40.574478       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:23:40.574595       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 05:23:40.574699       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:23:40.672953       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:23:40.673009       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:23:40.679954       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:23:40.680335       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:23:40.680537       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:23:40.682051       1 config.go:200] "Starting service config controller"
	I1216 05:23:40.682123       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:23:40.682167       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:23:40.682196       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:23:40.682232       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:23:40.682261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:23:40.682967       1 config.go:309] "Starting node config controller"
	I1216 05:23:40.689584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:23:40.689677       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:23:40.782377       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 05:23:40.782490       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:23:40.782521       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a] <==
	I1216 05:24:35.896662       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:24:40.174956       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:24:42.491601       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:24:42.515087       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 05:24:42.585328       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:24:43.340872       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:24:43.342174       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:24:43.382347       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:24:43.382819       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:24:43.391188       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:24:43.434425       1 config.go:200] "Starting service config controller"
	I1216 05:24:43.434522       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:24:43.435361       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:24:43.456693       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:24:43.456453       1 config.go:309] "Starting node config controller"
	I1216 05:24:43.468207       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:24:43.468662       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:24:43.453420       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:24:43.469487       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:24:43.563708       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:24:43.585219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:24:43.614176       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2e109aacd16433537ebfcc0e8f0693e4255203df82bdfdcb738267fffab893f0] <==
	I1216 05:23:29.568546       1 serving.go:386] Generated self-signed cert in-memory
	I1216 05:23:33.094455       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 05:23:33.094493       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:23:33.102803       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 05:23:33.102924       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 05:23:33.102992       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:23:33.103031       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:23:33.103072       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:23:33.103129       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:23:33.103271       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:23:33.103345       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:23:33.209695       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:23:33.209743       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1216 05:23:33.209830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:24:25.079750       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1216 05:24:25.079778       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1216 05:24:25.079799       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1216 05:24:25.079826       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:24:25.079846       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1216 05:24:25.079860       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:24:25.080141       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1216 05:24:25.080167       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e815c7290489a4f8e21f38a344e67da2bf330eddc5d3f56582952cc63031840b] <==
	I1216 05:24:41.710119       1 serving.go:386] Generated self-signed cert in-memory
	I1216 05:24:43.315174       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 05:24:43.315210       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:24:43.361802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:24:43.361960       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 05:24:43.368432       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 05:24:43.362793       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:24:43.385658       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:24:43.386318       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:24:43.386829       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:24:43.386880       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:24:43.469156       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1216 05:24:43.486954       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:24:43.501525       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 05:24:33 pause-879168 kubelet[1325]: I1216 05:24:33.926903    1325 scope.go:117] "RemoveContainer" containerID="d1c7aee14d1048b18fcd07209b943009d9a85a69d0c5cee668acd989fb9ed309"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.927651    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-879168\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="66cd1c4e25b8653d72022362819b5204" pod="kube-system/kube-scheduler-pause-879168"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.927982    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-879168\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ced2c465da15e66896d67b58bacd7e98" pod="kube-system/kube-controller-manager-pause-879168"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.928304    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-879168\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0e11e4f5f3b2f19031ae2b14521bbeb9" pod="kube-system/kube-apiserver-pause-879168"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.928620    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-dc7d6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c8b5f04f-9213-46c2-bd06-e330b1668b3d" pod="kube-system/kindnet-dc7d6"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.928910    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2xxq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ee80a4c8-c171-4039-a5c1-ae20319deaf1" pod="kube-system/kube-proxy-f2xxq"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.929243    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-bz4lq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="043ed348-b26d-4228-942e-88494a373c9b" pod="kube-system/coredns-66bc5c9577-bz4lq"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.929807    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-879168\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f7e5ad46cb5cb2295e601e47ada1517a" pod="kube-system/etcd-pause-879168"
	Dec 16 05:24:41 pause-879168 kubelet[1325]: E1216 05:24:41.823219    1325 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-879168\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 16 05:24:41 pause-879168 kubelet[1325]: E1216 05:24:41.839835    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-f2xxq\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="ee80a4c8-c171-4039-a5c1-ae20319deaf1" pod="kube-system/kube-proxy-f2xxq"
	Dec 16 05:24:41 pause-879168 kubelet[1325]: E1216 05:24:41.842793    1325 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-879168\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 16 05:24:41 pause-879168 kubelet[1325]: E1216 05:24:41.847186    1325 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-879168\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.009336    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-bz4lq\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="043ed348-b26d-4228-942e-88494a373c9b" pod="kube-system/coredns-66bc5c9577-bz4lq"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.050264    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-879168\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="f7e5ad46cb5cb2295e601e47ada1517a" pod="kube-system/etcd-pause-879168"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.057638    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-879168\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="66cd1c4e25b8653d72022362819b5204" pod="kube-system/kube-scheduler-pause-879168"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.062030    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-879168\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="ced2c465da15e66896d67b58bacd7e98" pod="kube-system/kube-controller-manager-pause-879168"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.079522    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-879168\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="0e11e4f5f3b2f19031ae2b14521bbeb9" pod="kube-system/kube-apiserver-pause-879168"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.097557    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-dc7d6\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="c8b5f04f-9213-46c2-bd06-e330b1668b3d" pod="kube-system/kindnet-dc7d6"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.119723    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-dc7d6\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="c8b5f04f-9213-46c2-bd06-e330b1668b3d" pod="kube-system/kindnet-dc7d6"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.148311    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-f2xxq\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="ee80a4c8-c171-4039-a5c1-ae20319deaf1" pod="kube-system/kube-proxy-f2xxq"
	Dec 16 05:24:44 pause-879168 kubelet[1325]: W1216 05:24:44.727282    1325 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 16 05:24:54 pause-879168 kubelet[1325]: W1216 05:24:54.756575    1325 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 16 05:24:55 pause-879168 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:24:55 pause-879168 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:24:55 pause-879168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-879168 -n pause-879168
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-879168 -n pause-879168: exit status 2 (687.392837ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-879168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
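A hedged aside for readers reproducing the probe above (nothing below is part of the captured output): minikube's `status` command encodes cluster state in its exit code, which is why the harness logs `exit status 2 (may be ok)` while stdout reads `Running`. A minimal Go sketch of the same check, reusing only the binary path and profile name seen in this report; the error handling is illustrative:

	// Sketch only; not part of the test run captured in this report.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as helpers_test.go:263 above.
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "pause-879168", "-n", "pause-879168")
		out, err := cmd.CombinedOutput()
		fmt.Printf("stdout: %s", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// minikube reports state through the exit code, so a non-zero
			// value here (2 in the run above) is informational, not fatal.
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}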
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-879168
helpers_test.go:244: (dbg) docker inspect pause-879168:

-- stdout --
	[
	    {
	        "Id": "68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d",
	        "Created": "2025-12-16T05:23:02.56832195Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604962,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T05:23:02.660328851Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
	        "ResolvConfPath": "/var/lib/docker/containers/68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d/hostname",
	        "HostsPath": "/var/lib/docker/containers/68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d/hosts",
	        "LogPath": "/var/lib/docker/containers/68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d/68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d-json.log",
	        "Name": "/pause-879168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-879168:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-879168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68b6e326a9a6d3fb336a75691ab3db429dd334dc1607f79d8bb420013100eb2d",
	                "LowerDir": "/var/lib/docker/overlay2/c662b5067eecde3f880e2c63b9472521f85720a11db7d7980992fceb50a90950-init/diff:/var/lib/docker/overlay2/64cb24f4d6f05ffb55cacbc496492ac303c33b515f4c1fac6e543dd16ae28032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c662b5067eecde3f880e2c63b9472521f85720a11db7d7980992fceb50a90950/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c662b5067eecde3f880e2c63b9472521f85720a11db7d7980992fceb50a90950/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c662b5067eecde3f880e2c63b9472521f85720a11db7d7980992fceb50a90950/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-879168",
	                "Source": "/var/lib/docker/volumes/pause-879168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-879168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-879168",
	                "name.minikube.sigs.k8s.io": "pause-879168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "884a060584b97a8ed2dce1b8bea8f58126e9ddad032c88fa9b2be221532f6f2c",
	            "SandboxKey": "/var/run/docker/netns/884a060584b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-879168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:28:08:b3:59:0a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d2fa6e76c2595ea0a91e6f4498896df8093ebd206776c444cfd3ab193a4a65c",
	                    "EndpointID": "dfdf6877162ad069739adfeb807270536d5c5a0513aad42f413f6159d53dee9b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-879168",
	                        "68b6e326a9a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-879168 -n pause-879168
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-879168 -n pause-879168: exit status 2 (503.006835ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-879168 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-879168 logs -n 25: (1.900781677s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-232318 --schedule 5m -v=5 --alsologtostderr                                                         │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --cancel-scheduled                                                                           │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │ 16 Dec 25 05:21 UTC │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │                     │
	│ stop    │ -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:21 UTC │ 16 Dec 25 05:22 UTC │
	│ delete  │ -p scheduled-stop-232318                                                                                              │ scheduled-stop-232318       │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │ 16 Dec 25 05:22 UTC │
	│ start   │ -p insufficient-storage-725109 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-725109 │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │                     │
	│ delete  │ -p insufficient-storage-725109                                                                                        │ insufficient-storage-725109 │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │ 16 Dec 25 05:22 UTC │
	│ start   │ -p NoKubernetes-868033 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │                     │
	│ start   │ -p pause-879168 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-879168                │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │ 16 Dec 25 05:24 UTC │
	│ start   │ -p NoKubernetes-868033 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:22 UTC │ 16 Dec 25 05:23 UTC │
	│ start   │ -p NoKubernetes-868033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:23 UTC │ 16 Dec 25 05:24 UTC │
	│ delete  │ -p NoKubernetes-868033                                                                                                │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ start   │ -p NoKubernetes-868033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ ssh     │ -p NoKubernetes-868033 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │                     │
	│ stop    │ -p NoKubernetes-868033                                                                                                │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ start   │ -p NoKubernetes-868033 --driver=docker  --container-runtime=crio                                                      │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ start   │ -p pause-879168 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-879168                │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ ssh     │ -p NoKubernetes-868033 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │                     │
	│ delete  │ -p NoKubernetes-868033                                                                                                │ NoKubernetes-868033         │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │ 16 Dec 25 05:24 UTC │
	│ start   │ -p missing-upgrade-508979 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-508979      │ jenkins │ v1.35.0 │ 16 Dec 25 05:24 UTC │                     │
	│ pause   │ -p pause-879168 --alsologtostderr -v=5                                                                                │ pause-879168                │ jenkins │ v1.37.0 │ 16 Dec 25 05:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:24:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:24:30.662586  613493 out.go:345] Setting OutFile to fd 1 ...
	I1216 05:24:30.662719  613493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 05:24:30.662723  613493 out.go:358] Setting ErrFile to fd 2...
	I1216 05:24:30.662727  613493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 05:24:30.662953  613493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:24:30.663340  613493 out.go:352] Setting JSON to false
	I1216 05:24:30.664273  613493 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14817,"bootTime":1765847854,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 05:24:30.664340  613493 start.go:139] virtualization:  
	I1216 05:24:30.668437  613493 out.go:177] * [missing-upgrade-508979] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I1216 05:24:30.671635  613493 out.go:177]   - MINIKUBE_LOCATION=22158
	I1216 05:24:30.671689  613493 notify.go:220] Checking for updates...
	I1216 05:24:30.678688  613493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:24:30.681798  613493 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 05:24:30.685007  613493 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 05:24:30.688190  613493 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 05:24:30.694746  613493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:24:30.698307  613493 config.go:182] Loaded profile config "pause-879168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:24:30.698406  613493 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 05:24:30.743247  613493 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1216 05:24:30.743360  613493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:24:30.748162  613493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/last_update_check: {Name:mk828cc8382d2363b57ddbd6e2a4114ce0b4dd86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:24:30.751825  613493 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1216 05:24:30.755377  613493 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1216 05:24:30.848296  613493 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-16 05:24:30.837272435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 05:24:30.848390  613493 docker.go:318] overlay module found
	I1216 05:24:30.851537  613493 out.go:177] * Using the docker driver based on user configuration
	I1216 05:24:30.854458  613493 start.go:297] selected driver: docker
	I1216 05:24:30.854469  613493 start.go:901] validating driver "docker" against <nil>
	I1216 05:24:30.854483  613493 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:24:30.855225  613493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:24:30.937263  613493 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-16 05:24:30.927536695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 05:24:30.937471  613493 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 05:24:30.937733  613493 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 05:24:30.940711  613493 out.go:177] * Using Docker driver with root privileges
	I1216 05:24:30.943590  613493 cni.go:84] Creating CNI manager for ""
	I1216 05:24:30.943653  613493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:24:30.943660  613493 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 05:24:30.943744  613493 start.go:340] cluster config:
	{Name:missing-upgrade-508979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-508979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:24:30.946718  613493 out.go:177] * Starting "missing-upgrade-508979" primary control-plane node in "missing-upgrade-508979" cluster
	I1216 05:24:30.949627  613493 cache.go:121] Beginning downloading kic base image for docker with crio
	I1216 05:24:30.952922  613493 out.go:177] * Pulling base image v0.0.46 ...
	I1216 05:24:30.955848  613493 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 05:24:30.955936  613493 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1216 05:24:30.972301  613493 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I1216 05:24:30.972460  613493 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I1216 05:24:30.972503  613493 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I1216 05:24:31.007887  613493 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1216 05:24:31.007903  613493 cache.go:56] Caching tarball of preloaded images
	I1216 05:24:31.008064  613493 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 05:24:31.011493  613493 out.go:177] * Downloading Kubernetes v1.32.0 preload ...
	I1216 05:24:30.344645  612363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:24:30.344663  612363 machine.go:97] duration metric: took 6.486716767s to provisionDockerMachine
	I1216 05:24:30.344696  612363 start.go:293] postStartSetup for "pause-879168" (driver="docker")
	I1216 05:24:30.344709  612363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:24:30.344781  612363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:24:30.344902  612363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-879168
	I1216 05:24:30.375995  612363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/pause-879168/id_rsa Username:docker}
	I1216 05:24:30.492361  612363 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:24:30.496232  612363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:24:30.496256  612363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:24:30.496267  612363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/addons for local assets ...
	I1216 05:24:30.496322  612363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-438353/.minikube/files for local assets ...
	I1216 05:24:30.496397  612363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem -> 4417272.pem in /etc/ssl/certs
	I1216 05:24:30.496501  612363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:24:30.510973  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 05:24:30.546897  612363 start.go:296] duration metric: took 202.154559ms for postStartSetup
	I1216 05:24:30.547018  612363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:24:30.547117  612363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-879168
	I1216 05:24:30.580239  612363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/pause-879168/id_rsa Username:docker}
	I1216 05:24:30.683264  612363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:24:30.688740  612363 fix.go:56] duration metric: took 6.851204041s for fixHost
	I1216 05:24:30.688764  612363 start.go:83] releasing machines lock for "pause-879168", held for 6.851255348s
	I1216 05:24:30.688832  612363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-879168
	I1216 05:24:30.716701  612363 ssh_runner.go:195] Run: cat /version.json
	I1216 05:24:30.716773  612363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-879168
	I1216 05:24:30.717188  612363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:24:30.717253  612363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-879168
	I1216 05:24:30.762322  612363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/pause-879168/id_rsa Username:docker}
	I1216 05:24:30.776302  612363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/pause-879168/id_rsa Username:docker}
	I1216 05:24:30.968533  612363 ssh_runner.go:195] Run: systemctl --version
	I1216 05:24:30.975102  612363 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:24:31.021641  612363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:24:31.026306  612363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:24:31.026380  612363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:24:31.034489  612363 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:24:31.034513  612363 start.go:496] detecting cgroup driver to use...
	I1216 05:24:31.034545  612363 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:24:31.034593  612363 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:24:31.049839  612363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:24:31.063612  612363 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:24:31.063690  612363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:24:31.080437  612363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:24:31.093438  612363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:24:31.232917  612363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:24:31.363660  612363 docker.go:234] disabling docker service ...
	I1216 05:24:31.363726  612363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:24:31.379621  612363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:24:31.393750  612363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:24:31.547729  612363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:24:31.683442  612363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:24:31.701434  612363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:24:31.718507  612363 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:24:31.718568  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.738071  612363 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 05:24:31.738160  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.747410  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.756345  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.766135  612363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:24:31.777230  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.786884  612363 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.795997  612363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:24:31.807162  612363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:24:31.818962  612363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:24:31.829444  612363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:24:31.970891  612363 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:24:32.384359  612363 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:24:32.384436  612363 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:24:32.388832  612363 start.go:564] Will wait 60s for crictl version
	I1216 05:24:32.388900  612363 ssh_runner.go:195] Run: which crictl
	I1216 05:24:32.393748  612363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:24:32.429673  612363 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1216 05:24:32.429773  612363 ssh_runner.go:195] Run: crio --version
	I1216 05:24:32.462262  612363 ssh_runner.go:195] Run: crio --version
	I1216 05:24:32.501095  612363 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1216 05:24:32.504034  612363 cli_runner.go:164] Run: docker network inspect pause-879168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:24:32.560528  612363 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1216 05:24:32.565609  612363 kubeadm.go:884] updating cluster {Name:pause-879168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-879168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:24:32.565750  612363 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:24:32.565801  612363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:24:32.607568  612363 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:24:32.607589  612363 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:24:32.607645  612363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:24:32.642279  612363 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:24:32.642300  612363 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:24:32.642307  612363 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1216 05:24:32.642421  612363 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-879168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-879168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:24:32.642499  612363 ssh_runner.go:195] Run: crio config
	I1216 05:24:32.711930  612363 cni.go:84] Creating CNI manager for ""
	I1216 05:24:32.712007  612363 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 05:24:32.712042  612363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:24:32.712094  612363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-879168 NodeName:pause-879168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:24:32.712283  612363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-879168"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:24:32.712401  612363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:24:32.721800  612363 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:24:32.721913  612363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:24:32.729887  612363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1216 05:24:32.743389  612363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:24:32.756933  612363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1216 05:24:32.771316  612363 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:24:32.776113  612363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:24:32.949145  612363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:24:32.965352  612363 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168 for IP: 192.168.76.2
	I1216 05:24:32.965371  612363 certs.go:195] generating shared ca certs ...
	I1216 05:24:32.965387  612363 certs.go:227] acquiring lock for ca certs: {Name:mkcd539774b4b035ba1dca5a8ff90a5a42b877f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:24:32.965554  612363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key
	I1216 05:24:32.965623  612363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key
	I1216 05:24:32.965641  612363 certs.go:257] generating profile certs ...
	I1216 05:24:32.965733  612363 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/client.key
	I1216 05:24:32.965799  612363 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/apiserver.key.5384c97b
	I1216 05:24:32.965841  612363 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/proxy-client.key
	I1216 05:24:32.965957  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem (1338 bytes)
	W1216 05:24:32.965986  612363 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727_empty.pem, impossibly tiny 0 bytes
	I1216 05:24:32.965994  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:24:32.966020  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem (1078 bytes)
	I1216 05:24:32.966042  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:24:32.966065  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/certs/key.pem (1679 bytes)
	I1216 05:24:32.966115  612363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem (1708 bytes)
	I1216 05:24:32.966756  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:24:32.988925  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:24:33.021839  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:24:33.048954  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 05:24:33.070053  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:24:33.090617  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 05:24:33.110737  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:24:33.131017  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:24:33.153449  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/ssl/certs/4417272.pem --> /usr/share/ca-certificates/4417272.pem (1708 bytes)
	I1216 05:24:33.174909  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:24:33.194992  612363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-438353/.minikube/certs/441727.pem --> /usr/share/ca-certificates/441727.pem (1338 bytes)
	I1216 05:24:33.218710  612363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:24:33.233502  612363 ssh_runner.go:195] Run: openssl version
	I1216 05:24:33.240431  612363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/441727.pem
	I1216 05:24:33.248987  612363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/441727.pem /etc/ssl/certs/441727.pem
	I1216 05:24:33.257703  612363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/441727.pem
	I1216 05:24:33.262329  612363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:21 /usr/share/ca-certificates/441727.pem
	I1216 05:24:33.262391  612363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/441727.pem
	I1216 05:24:33.304934  612363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:24:33.313351  612363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4417272.pem
	I1216 05:24:33.321429  612363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4417272.pem /etc/ssl/certs/4417272.pem
	I1216 05:24:33.333921  612363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4417272.pem
	I1216 05:24:33.338410  612363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:21 /usr/share/ca-certificates/4417272.pem
	I1216 05:24:33.338549  612363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4417272.pem
	I1216 05:24:33.384590  612363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:24:33.392912  612363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:24:33.401223  612363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:24:33.410037  612363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:24:33.414413  612363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:11 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:24:33.414555  612363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:24:33.456099  612363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:24:33.464498  612363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:24:33.468927  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:24:33.510874  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:24:33.552660  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:24:33.594051  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:24:33.636225  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:24:33.679116  612363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 05:24:33.758162  612363 kubeadm.go:401] StartCluster: {Name:pause-879168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-879168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:24:33.758336  612363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:24:33.758437  612363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:24:33.834111  612363 cri.go:89] found id: "d1c7aee14d1048b18fcd07209b943009d9a85a69d0c5cee668acd989fb9ed309"
	I1216 05:24:33.834196  612363 cri.go:89] found id: "6d8570293bc3b615ca8558c9c245c34413db3307ea4c9dc1156de6be82366c43"
	I1216 05:24:33.834213  612363 cri.go:89] found id: "3e66591d8ee86b0879aeecb1a61f768173550e0389177364fa184b2694aff00f"
	I1216 05:24:33.834229  612363 cri.go:89] found id: "c0d025670a91a4d8a61391711a080e93e875e808cbaa29712ba6feb5636a12cc"
	I1216 05:24:33.834263  612363 cri.go:89] found id: "d8bd8959d629eec53cb3c82761a3da996cdd881c9d140609854bbf22b3702a51"
	I1216 05:24:33.834285  612363 cri.go:89] found id: "2e109aacd16433537ebfcc0e8f0693e4255203df82bdfdcb738267fffab893f0"
	I1216 05:24:33.834303  612363 cri.go:89] found id: "80e7a81bd9e8176865d8a2b2254d322cff4d032e109644dc1ff242823b19f2c2"
	I1216 05:24:33.834321  612363 cri.go:89] found id: ""
	I1216 05:24:33.834405  612363 ssh_runner.go:195] Run: sudo runc list -f json
	W1216 05:24:33.855034  612363 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T05:24:33Z" level=error msg="open /run/runc: no such file or directory"
	I1216 05:24:33.855158  612363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:24:33.887496  612363 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:24:33.887569  612363 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:24:33.887662  612363 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:24:33.897298  612363 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:24:33.897921  612363 kubeconfig.go:125] found "pause-879168" server: "https://192.168.76.2:8443"
	I1216 05:24:33.899061  612363 kapi.go:59] client config for pause-879168: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:24:33.899673  612363 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 05:24:33.899715  612363 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 05:24:33.899796  612363 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 05:24:33.899825  612363 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 05:24:33.899843  612363 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 05:24:33.900179  612363 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:24:33.909837  612363 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1216 05:24:33.909914  612363 kubeadm.go:602] duration metric: took 22.326346ms to restartPrimaryControlPlane
	I1216 05:24:33.909939  612363 kubeadm.go:403] duration metric: took 151.788009ms to StartCluster
	I1216 05:24:33.909986  612363 settings.go:142] acquiring lock: {Name:mk7579526d30444d4a36dd9eeacfd82389e55168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:24:33.910060  612363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 05:24:33.910782  612363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/kubeconfig: {Name:mk423646e92eb7ee22928a9ef39d81e213a8d27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:24:33.911277  612363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:24:33.911669  612363 config.go:182] Loaded profile config "pause-879168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:24:33.911754  612363 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:24:33.923093  612363 out.go:179] * Verifying Kubernetes components...
	I1216 05:24:33.923253  612363 out.go:179] * Enabled addons: 
	I1216 05:24:31.014458  613493 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 ...
	I1216 05:24:31.097125  613493 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:d3dc3b83b826438926b7b91af837ed7b -> /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1216 05:24:33.932039  612363 addons.go:530] duration metric: took 20.27861ms for enable addons: enabled=[]
	I1216 05:24:33.932184  612363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:24:34.727944  612363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:24:34.750514  612363 node_ready.go:35] waiting up to 6m0s for node "pause-879168" to be "Ready" ...
	I1216 05:24:37.490761  613493 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I1216 05:24:37.490775  613493 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
	I1216 05:24:38.243273  613493 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 ...
	I1216 05:24:38.243448  613493 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 ...
	I1216 05:24:40.173795  613493 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 05:24:40.173912  613493 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/missing-upgrade-508979/config.json ...
	I1216 05:24:40.173937  613493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/missing-upgrade-508979/config.json: {Name:mkd1727fdfb27a771a55ed04579d60062c7c0da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:24:42.063702  612363 node_ready.go:49] node "pause-879168" is "Ready"
	I1216 05:24:42.063732  612363 node_ready.go:38] duration metric: took 7.313187368s for node "pause-879168" to be "Ready" ...
	I1216 05:24:42.063747  612363 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:24:42.063820  612363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:24:42.105470  612363 api_server.go:72] duration metric: took 8.19412301s to wait for apiserver process to appear ...
	I1216 05:24:42.105510  612363 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:24:42.105530  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:42.231169  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:24:42.231261  612363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:24:42.605647  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:42.644449  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:24:42.644617  612363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:24:43.105658  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:43.137880  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:24:43.137977  612363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:24:43.606167  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:43.635990  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:24:43.636087  612363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:24:44.105641  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:44.122200  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:24:44.122279  612363 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:24:44.605648  612363 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1216 05:24:44.621632  612363 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1216 05:24:44.623304  612363 api_server.go:141] control plane version: v1.34.2
	I1216 05:24:44.623383  612363 api_server.go:131] duration metric: took 2.517855003s to wait for apiserver health ...
	I1216 05:24:44.623407  612363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:24:44.633735  612363 system_pods.go:59] 7 kube-system pods found
	I1216 05:24:44.633846  612363 system_pods.go:61] "coredns-66bc5c9577-bz4lq" [043ed348-b26d-4228-942e-88494a373c9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:24:44.633874  612363 system_pods.go:61] "etcd-pause-879168" [b1c55721-1051-4e78-a67e-9503b65225b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:24:44.633894  612363 system_pods.go:61] "kindnet-dc7d6" [c8b5f04f-9213-46c2-bd06-e330b1668b3d] Running
	I1216 05:24:44.633931  612363 system_pods.go:61] "kube-apiserver-pause-879168" [ccbce79a-a32b-44c5-9faf-438f9f887e93] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:24:44.633959  612363 system_pods.go:61] "kube-controller-manager-pause-879168" [ff071a5d-668d-440c-9956-b3a41041cfdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:24:44.633986  612363 system_pods.go:61] "kube-proxy-f2xxq" [ee80a4c8-c171-4039-a5c1-ae20319deaf1] Running
	I1216 05:24:44.634018  612363 system_pods.go:61] "kube-scheduler-pause-879168" [5709dd8a-ff9a-4e27-9f11-104c657d37b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:24:44.634043  612363 system_pods.go:74] duration metric: took 10.613259ms to wait for pod list to return data ...
	I1216 05:24:44.634064  612363 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:24:44.642332  612363 default_sa.go:45] found service account: "default"
	I1216 05:24:44.642415  612363 default_sa.go:55] duration metric: took 8.331446ms for default service account to be created ...
	I1216 05:24:44.642441  612363 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:24:44.647414  612363 system_pods.go:86] 7 kube-system pods found
	I1216 05:24:44.647500  612363 system_pods.go:89] "coredns-66bc5c9577-bz4lq" [043ed348-b26d-4228-942e-88494a373c9b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:24:44.647527  612363 system_pods.go:89] "etcd-pause-879168" [b1c55721-1051-4e78-a67e-9503b65225b9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:24:44.647562  612363 system_pods.go:89] "kindnet-dc7d6" [c8b5f04f-9213-46c2-bd06-e330b1668b3d] Running
	I1216 05:24:44.647586  612363 system_pods.go:89] "kube-apiserver-pause-879168" [ccbce79a-a32b-44c5-9faf-438f9f887e93] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:24:44.647607  612363 system_pods.go:89] "kube-controller-manager-pause-879168" [ff071a5d-668d-440c-9956-b3a41041cfdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:24:44.647626  612363 system_pods.go:89] "kube-proxy-f2xxq" [ee80a4c8-c171-4039-a5c1-ae20319deaf1] Running
	I1216 05:24:44.647666  612363 system_pods.go:89] "kube-scheduler-pause-879168" [5709dd8a-ff9a-4e27-9f11-104c657d37b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:24:44.647688  612363 system_pods.go:126] duration metric: took 5.226937ms to wait for k8s-apps to be running ...
	I1216 05:24:44.647724  612363 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:24:44.647811  612363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:24:44.690972  612363 system_svc.go:56] duration metric: took 43.252593ms WaitForService to wait for kubelet
	I1216 05:24:44.691054  612363 kubeadm.go:587] duration metric: took 10.779715079s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:24:44.691088  612363 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:24:44.704871  612363 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 05:24:44.704951  612363 node_conditions.go:123] node cpu capacity is 2
	I1216 05:24:44.704979  612363 node_conditions.go:105] duration metric: took 13.868104ms to run NodePressure ...
	I1216 05:24:44.705018  612363 start.go:242] waiting for startup goroutines ...
	I1216 05:24:44.705041  612363 start.go:247] waiting for cluster config update ...
	I1216 05:24:44.705090  612363 start.go:256] writing updated cluster config ...
	I1216 05:24:44.705442  612363 ssh_runner.go:195] Run: rm -f paused
	I1216 05:24:44.717619  612363 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:24:44.718292  612363 kapi.go:59] client config for pause-879168: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/profiles/pause-879168/client.key", CAFile:"/home/jenkins/minikube-integration/22158-438353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:24:44.744423  612363 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bz4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:46.257170  612363 pod_ready.go:94] pod "coredns-66bc5c9577-bz4lq" is "Ready"
	I1216 05:24:46.257245  612363 pod_ready.go:86] duration metric: took 1.512746999s for pod "coredns-66bc5c9577-bz4lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:46.260105  612363 pod_ready.go:83] waiting for pod "etcd-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:24:48.269196  612363 pod_ready.go:104] pod "etcd-pause-879168" is not "Ready", error: <nil>
	W1216 05:24:50.765718  612363 pod_ready.go:104] pod "etcd-pause-879168" is not "Ready", error: <nil>
	I1216 05:24:52.266051  612363 pod_ready.go:94] pod "etcd-pause-879168" is "Ready"
	I1216 05:24:52.266127  612363 pod_ready.go:86] duration metric: took 6.005952213s for pod "etcd-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:52.268764  612363 pod_ready.go:83] waiting for pod "kube-apiserver-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:52.273646  612363 pod_ready.go:94] pod "kube-apiserver-pause-879168" is "Ready"
	I1216 05:24:52.273676  612363 pod_ready.go:86] duration metric: took 4.88345ms for pod "kube-apiserver-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:52.275806  612363 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:24:54.281150  612363 pod_ready.go:104] pod "kube-controller-manager-pause-879168" is not "Ready", error: <nil>
	I1216 05:24:54.782402  612363 pod_ready.go:94] pod "kube-controller-manager-pause-879168" is "Ready"
	I1216 05:24:54.782428  612363 pod_ready.go:86] duration metric: took 2.506595314s for pod "kube-controller-manager-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:54.786325  612363 pod_ready.go:83] waiting for pod "kube-proxy-f2xxq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:54.792579  612363 pod_ready.go:94] pod "kube-proxy-f2xxq" is "Ready"
	I1216 05:24:54.792655  612363 pod_ready.go:86] duration metric: took 6.305529ms for pod "kube-proxy-f2xxq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:54.795668  612363 pod_ready.go:83] waiting for pod "kube-scheduler-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:55.064929  612363 pod_ready.go:94] pod "kube-scheduler-pause-879168" is "Ready"
	I1216 05:24:55.064962  612363 pod_ready.go:86] duration metric: took 269.271553ms for pod "kube-scheduler-pause-879168" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:24:55.064975  612363 pod_ready.go:40] duration metric: took 10.347271328s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:24:55.146719  612363 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1216 05:24:55.151867  612363 out.go:179] * Done! kubectl is now configured to use "pause-879168" cluster and "default" namespace by default
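
(Aside: the pod_ready.go waits above poll each kube-system pod until its Ready condition is True. A minimal Go sketch of the same check with client-go follows; the kubeconfig path is a placeholder assumption, and this illustrates the pattern rather than reproducing minikube's own code.)

	// waitPodReady polls a pod until its Ready condition is True or the
	// timeout elapses, mirroring the pod_ready.go waits in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod is "Ready"
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s never became Ready", ns, name)
	}

	func main() {
		// placeholder kubeconfig path; adjust for your environment
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(cs, "kube-system", "etcd-pause-879168", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}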
	I1216 05:24:58.439225  613493 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from cached tarball
	I1216 05:24:58.439266  613493 cache.go:227] Successfully downloaded all kic artifacts
	I1216 05:24:58.439310  613493 start.go:360] acquireMachinesLock for missing-upgrade-508979: {Name:mk4fc08d859c2010e67b3ba78ba312478df330aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:24:58.439422  613493 start.go:364] duration metric: took 94.712µs to acquireMachinesLock for "missing-upgrade-508979"
	I1216 05:24:58.439446  613493 start.go:93] Provisioning new machine with config: &{Name:missing-upgrade-508979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-508979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:24:58.439515  613493 start.go:125] createHost starting for "" (driver="docker")
	I1216 05:24:58.443272  613493 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 05:24:58.443576  613493 start.go:159] libmachine.API.Create for "missing-upgrade-508979" (driver="docker")
	I1216 05:24:58.443605  613493 client.go:168] LocalClient.Create starting
	I1216 05:24:58.443680  613493 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-438353/.minikube/certs/ca.pem
	I1216 05:24:58.443717  613493 main.go:141] libmachine: Decoding PEM data...
	I1216 05:24:58.443731  613493 main.go:141] libmachine: Parsing certificate...
	I1216 05:24:58.443787  613493 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-438353/.minikube/certs/cert.pem
	I1216 05:24:58.443803  613493 main.go:141] libmachine: Decoding PEM data...
	I1216 05:24:58.443812  613493 main.go:141] libmachine: Parsing certificate...
	I1216 05:24:58.444213  613493 cli_runner.go:164] Run: docker network inspect missing-upgrade-508979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 05:24:58.473543  613493 cli_runner.go:211] docker network inspect missing-upgrade-508979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 05:24:58.473616  613493 network_create.go:284] running [docker network inspect missing-upgrade-508979] to gather additional debugging logs...
	I1216 05:24:58.473632  613493 cli_runner.go:164] Run: docker network inspect missing-upgrade-508979
	W1216 05:24:58.492520  613493 cli_runner.go:211] docker network inspect missing-upgrade-508979 returned with exit code 1
	I1216 05:24:58.492540  613493 network_create.go:287] error running [docker network inspect missing-upgrade-508979]: docker network inspect missing-upgrade-508979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-508979 not found
	I1216 05:24:58.492559  613493 network_create.go:289] output of [docker network inspect missing-upgrade-508979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-508979 not found
	
	** /stderr **
	I1216 05:24:58.492663  613493 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 05:24:58.512891  613493 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66a1741c73ed IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:45:79:86:27:66} reservation:<nil>}
	I1216 05:24:58.513317  613493 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d27f32a0237f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:74:e9:6d:a1:43} reservation:<nil>}
	I1216 05:24:58.513615  613493 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5beb726a92d1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:21:8b:0e:44:88} reservation:<nil>}
	I1216 05:24:58.513977  613493 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3d2fa6e76c25 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:83:66:ec:01:34} reservation:<nil>}
	I1216 05:24:58.514588  613493 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001718960}
	I1216 05:24:58.514608  613493 network_create.go:124] attempt to create docker network missing-upgrade-508979 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1216 05:24:58.514673  613493 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-508979 missing-upgrade-508979
	I1216 05:24:58.587515  613493 network_create.go:108] docker network missing-upgrade-508979 192.168.85.0/24 created
	I1216 05:24:58.587538  613493 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-508979" container
	I1216 05:24:58.587620  613493 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 05:24:58.607901  613493 cli_runner.go:164] Run: docker volume create missing-upgrade-508979 --label name.minikube.sigs.k8s.io=missing-upgrade-508979 --label created_by.minikube.sigs.k8s.io=true
	I1216 05:24:58.634706  613493 oci.go:103] Successfully created a docker volume missing-upgrade-508979
	I1216 05:24:58.634814  613493 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-508979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-508979 --entrypoint /usr/bin/test -v missing-upgrade-508979:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
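
(Aside: the repeated api_server.go healthz probes in the log above are a plain poll-until-200 loop against the apiserver. A minimal Go sketch of that pattern, assuming the endpoint, interval, and timeout shown in the log; this is an illustration, not minikube's actual implementation.)

	// pollHealthz polls an apiserver /healthz endpoint every 500ms until it
	// returns HTTP 200 or the timeout elapses. A 500 response with
	// "[-]poststarthook/... failed" lines simply means "not ready yet".
	// TLS verification is skipped only because this sketch carries no CA
	// bundle; minikube itself verifies against the cluster CA.
	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"net/http"
		"time"
	)

	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: apiserver is healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("timed out waiting for healthz")
	}

	func main() {
		if err := pollHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}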
	
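(Aside: the network.go lines above walk a list of private /24 candidates and pick the first one that no existing docker bridge occupies. A rough Go sketch of that first-free-subnet scan; the candidate list and taken-set below are assumptions taken from the log, for illustration only.)

	// firstFreeSubnet returns the first candidate /24 that is not already
	// used by an existing docker bridge network.
	package main

	import "fmt"

	func firstFreeSubnet(candidates []string, taken map[string]bool) (string, error) {
		for _, cidr := range candidates {
			if taken[cidr] {
				fmt.Printf("skipping subnet %s that is taken\n", cidr)
				continue
			}
			return cidr, nil
		}
		return "", fmt.Errorf("no free subnet among %d candidates", len(candidates))
	}

	func main() {
		candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24"}
		taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true, "192.168.76.0/24": true}
		if cidr, err := firstFreeSubnet(candidates, taken); err == nil {
			fmt.Println("using free private subnet", cidr) // 192.168.85.0/24, as in the log
		}
	}
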
	
	==> CRI-O <==
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.165864498Z" level=info msg="Started container" PID=2387 containerID=0ecb3cb231904dc7f5c6ab5a546ad2edc08955e1ecbc8c04bffec5e146eb5865 description=kube-system/etcd-pause-879168/etcd id=26f2e24a-1438-44e5-9565-75dea39aa55b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad6a660e6a103a08a7eb2deffe94059769c068fcc5def3929c833e65380bb591
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.178752465Z" level=info msg="Started container" PID=2400 containerID=292fc57a6b2f05e0366768d4818f2f82aa3678cab45473b441a002b1c2edf832 description=kube-system/kindnet-dc7d6/kindnet-cni id=c494f42d-5f25-4e83-a319-93beaa5f3c43 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a1dc035379f6c8641fdd60b31fc19c443a6f0c53bf0d1c7538c69fbf0b4a779
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.189832625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.190588277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.242641589Z" level=info msg="Created container 6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092: kube-system/coredns-66bc5c9577-bz4lq/coredns" id=7e5c5f02-d62f-45ea-9f1e-38309d6080f4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.243411427Z" level=info msg="Starting container: 6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092" id=b1a62bee-c222-4272-bd59-73ffcdff5930 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:24:34 pause-879168 crio[2115]: time="2025-12-16T05:24:34.245381221Z" level=info msg="Started container" PID=2443 containerID=6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092 description=kube-system/coredns-66bc5c9577-bz4lq/coredns id=b1a62bee-c222-4272-bd59-73ffcdff5930 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4dda7b121b7cd3d16d36abb3bcb791c3e037c34e624ae6bfc469a92ba0e69250
	Dec 16 05:24:35 pause-879168 crio[2115]: time="2025-12-16T05:24:35.153404671Z" level=info msg="Created container d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a: kube-system/kube-proxy-f2xxq/kube-proxy" id=aa54f715-848d-4a4e-a127-44f7471f81a0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 05:24:35 pause-879168 crio[2115]: time="2025-12-16T05:24:35.154617Z" level=info msg="Starting container: d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a" id=6546d864-1c20-4041-b2ae-df5ffc53b084 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 05:24:35 pause-879168 crio[2115]: time="2025-12-16T05:24:35.157751926Z" level=info msg="Started container" PID=2446 containerID=d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a description=kube-system/kube-proxy-f2xxq/kube-proxy id=6546d864-1c20-4041-b2ae-df5ffc53b084 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6772a0aa353c2009df4b08b88131ee8d4cf1eb71f0bf77d431b0ce5860fa6021
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.748956663Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.75545284Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.755622507Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.755704059Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.769329658Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.769518698Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.769619983Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.779440708Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.779726119Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.779809295Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.789309057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.789566619Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.789733939Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.797924871Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 16 05:24:44 pause-879168 crio[2115]: time="2025-12-16T05:24:44.798096458Z" level=info msg="Updated default CNI network name to kindnet"
	
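(Aside: the "CNI monitoring event" lines show CRI-O watching /etc/cni/net.d and re-reading the default network whenever a conflist file is created, written, or renamed. A minimal sketch of such a directory watch using the fsnotify library — a common choice for this kind of watch, assumed here for illustration and not necessarily what CRI-O uses internally.)

	// Watch a CNI config directory and log create/write/rename events,
	// mirroring the "CNI monitoring event" lines in the CRI-O log above.
	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		watcher, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer watcher.Close()

		if err := watcher.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-watcher.Events:
				if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
					log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
					// a real runtime would re-parse the conflist files here
					// and update the default CNI network name accordingly
				}
			case err := <-watcher.Errors:
				log.Println("watch error:", err)
			}
		}
	}
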
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d02fe2c02c60f       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   27 seconds ago       Running             kube-proxy                1                   6772a0aa353c2       kube-proxy-f2xxq                       kube-system
	6dcef43081a8a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   27 seconds ago       Running             coredns                   1                   4dda7b121b7cd       coredns-66bc5c9577-bz4lq               kube-system
	292fc57a6b2f0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   28 seconds ago       Running             kindnet-cni               1                   0a1dc035379f6       kindnet-dc7d6                          kube-system
	0ecb3cb231904       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   28 seconds ago       Running             etcd                      1                   ad6a660e6a103       etcd-pause-879168                      kube-system
	e815c7290489a       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   28 seconds ago       Running             kube-scheduler            1                   0beacfc985bd4       kube-scheduler-pause-879168            kube-system
	6daae05879a8b       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   28 seconds ago       Running             kube-controller-manager   1                   5d830a872fa5c       kube-controller-manager-pause-879168   kube-system
	6b8e81e70d403       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   28 seconds ago       Running             kube-apiserver            1                   933759dd9eb06       kube-apiserver-pause-879168            kube-system
	d1c7aee14d104       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   40 seconds ago       Exited              coredns                   0                   4dda7b121b7cd       coredns-66bc5c9577-bz4lq               kube-system
	6d8570293bc3b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   0a1dc035379f6       kindnet-dc7d6                          kube-system
	3e66591d8ee86       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   6772a0aa353c2       kube-proxy-f2xxq                       kube-system
	c0d025670a91a       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   933759dd9eb06       kube-apiserver-pause-879168            kube-system
	d8bd8959d629e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   5d830a872fa5c       kube-controller-manager-pause-879168   kube-system
	2e109aacd1643       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   0beacfc985bd4       kube-scheduler-pause-879168            kube-system
	80e7a81bd9e81       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   ad6a660e6a103       etcd-pause-879168                      kube-system
	
	
	==> coredns [6dcef43081a8a3d3ed146b61fae602b4d2bfcf12509a31a825edd7f574f62092] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56272 - 48153 "HINFO IN 6667063183447398688.7611637045416865144. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019291457s
	
	
	==> coredns [d1c7aee14d1048b18fcd07209b943009d9a85a69d0c5cee668acd989fb9ed309] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59557 - 26781 "HINFO IN 2066334530245165092.2205259519758289492. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025914832s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-879168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-879168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=pause-879168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_23_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:23:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-879168
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:24:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:24:21 +0000   Tue, 16 Dec 2025 05:23:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:24:21 +0000   Tue, 16 Dec 2025 05:23:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:24:21 +0000   Tue, 16 Dec 2025 05:23:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:24:21 +0000   Tue, 16 Dec 2025 05:24:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-879168
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b01d95696b577408f2b2782693c8bc0
	  System UUID:                9ffec5f8-a07d-409c-8e82-bddcfcb65e99
	  Boot ID:                    e72ece1f-d416-4c20-8564-468e8b5f7888
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-bz4lq                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     83s
	  kube-system                 etcd-pause-879168                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         88s
	  kube-system                 kindnet-dc7d6                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-pause-879168             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-879168    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-f2xxq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-pause-879168             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 81s                  kube-proxy       
	  Normal   Starting                 18s                  kube-proxy       
	  Warning  CgroupV1                 100s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  100s (x8 over 100s)  kubelet          Node pause-879168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s (x8 over 100s)  kubelet          Node pause-879168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s (x8 over 100s)  kubelet          Node pause-879168 status is now: NodeHasSufficientPID
	  Normal   Starting                 88s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 88s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s                  kubelet          Node pause-879168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s                  kubelet          Node pause-879168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s                  kubelet          Node pause-879168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           84s                  node-controller  Node pause-879168 event: Registered Node pause-879168 in Controller
	  Normal   NodeReady                41s                  kubelet          Node pause-879168 status is now: NodeReady
	  Normal   RegisteredNode           14s                  node-controller  Node pause-879168 event: Registered Node pause-879168 in Controller
	
	
	==> dmesg <==
	[Dec16 04:58] overlayfs: idmapped layers are currently not supported
	[  +2.957541] overlayfs: idmapped layers are currently not supported
	[Dec16 04:59] overlayfs: idmapped layers are currently not supported
	[Dec16 05:01] overlayfs: idmapped layers are currently not supported
	[Dec16 05:02] overlayfs: idmapped layers are currently not supported
	[  +4.043407] overlayfs: idmapped layers are currently not supported
	[Dec16 05:03] overlayfs: idmapped layers are currently not supported
	[Dec16 05:04] overlayfs: idmapped layers are currently not supported
	[Dec16 05:05] overlayfs: idmapped layers are currently not supported
	[Dec16 05:10] overlayfs: idmapped layers are currently not supported
	[Dec16 05:11] overlayfs: idmapped layers are currently not supported
	[Dec16 05:12] overlayfs: idmapped layers are currently not supported
	[Dec16 05:13] overlayfs: idmapped layers are currently not supported
	[Dec16 05:14] overlayfs: idmapped layers are currently not supported
	[Dec16 05:16] overlayfs: idmapped layers are currently not supported
	[ +25.166334] overlayfs: idmapped layers are currently not supported
	[  +0.467202] overlayfs: idmapped layers are currently not supported
	[Dec16 05:17] overlayfs: idmapped layers are currently not supported
	[ +18.764288] overlayfs: idmapped layers are currently not supported
	[Dec16 05:18] overlayfs: idmapped layers are currently not supported
	[ +26.071219] overlayfs: idmapped layers are currently not supported
	[Dec16 05:20] overlayfs: idmapped layers are currently not supported
	[Dec16 05:21] overlayfs: idmapped layers are currently not supported
	[Dec16 05:23] overlayfs: idmapped layers are currently not supported
	[  +3.507219] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0ecb3cb231904dc7f5c6ab5a546ad2edc08955e1ecbc8c04bffec5e146eb5865] <==
	{"level":"warn","ts":"2025-12-16T05:24:37.801394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.828467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.844599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.868225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.883592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.937699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:37.978050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.016331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.058125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.112505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.147455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.246043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.294731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.388546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.584622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.706112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.745194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.838385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:38.966572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.058950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.157252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.226601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.294284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.374291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:24:39.576753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41578","server-name":"","error":"EOF"}
	
	
	==> etcd [80e7a81bd9e8176865d8a2b2254d322cff4d032e109644dc1ff242823b19f2c2] <==
	{"level":"warn","ts":"2025-12-16T05:23:28.468565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.496134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.534682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.561869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.599183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.613293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:23:28.756038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49158","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T05:24:25.079148Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-16T05:24:25.079208Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-879168","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-16T05:24:25.079307Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-16T05:24:25.357325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-16T05:24:25.357526Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T05:24:25.357594Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-12-16T05:24:25.357607Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T05:24:25.357672Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-16T05:24:25.357700Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-12-16T05:24:25.357703Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-16T05:24:25.357675Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T05:24:25.357728Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-16T05:24:25.357736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T05:24:25.357747Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-16T05:24:25.360965Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-16T05:24:25.361050Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T05:24:25.361117Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-16T05:24:25.361126Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-879168","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 05:25:02 up  4:07,  0 user,  load average: 3.42, 2.04, 1.73
	Linux pause-879168 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [292fc57a6b2f05e0366768d4818f2f82aa3678cab45473b441a002b1c2edf832] <==
	I1216 05:24:34.363026       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:24:34.363396       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 05:24:34.363569       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:24:34.363612       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:24:34.363649       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:24:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:24:34.747053       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:24:34.747755       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:24:34.752416       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:24:34.753613       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1216 05:24:42.353003       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:24:42.353151       1 metrics.go:72] Registering metrics
	I1216 05:24:42.353263       1 controller.go:711] "Syncing nftables rules"
	I1216 05:24:44.745205       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:24:44.747838       1 main.go:301] handling current node
	I1216 05:24:54.744963       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:24:54.745007       1 main.go:301] handling current node
	
	
	==> kindnet [6d8570293bc3b615ca8558c9c245c34413db3307ea4c9dc1156de6be82366c43] <==
	I1216 05:23:40.431896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 05:23:40.517225       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1216 05:23:40.517502       1 main.go:148] setting mtu 1500 for CNI 
	I1216 05:23:40.517548       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 05:23:40.517581       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-16T05:23:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1216 05:23:40.718506       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1216 05:23:40.718581       1 controller.go:381] "Waiting for informer caches to sync"
	I1216 05:23:40.718613       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1216 05:23:40.719769       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1216 05:24:10.719279       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1216 05:24:10.719393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1216 05:24:10.719484       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1216 05:24:10.720778       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1216 05:24:12.220821       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1216 05:24:12.220919       1 metrics.go:72] Registering metrics
	I1216 05:24:12.221025       1 controller.go:711] "Syncing nftables rules"
	I1216 05:24:20.725148       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1216 05:24:20.725203       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6b8e81e70d40373c6eb323cdec44bd51871ee3925462b7f451a590587032fedb] <==
	I1216 05:24:41.963516       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1216 05:24:42.105394       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 05:24:42.105614       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 05:24:42.105910       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 05:24:42.117741       1 aggregator.go:171] initial CRD sync complete...
	I1216 05:24:42.117842       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 05:24:42.117875       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 05:24:42.117905       1 cache.go:39] Caches are synced for autoregister controller
	I1216 05:24:42.118448       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 05:24:42.150841       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 05:24:42.151953       1 policy_source.go:240] refreshing policies
	I1216 05:24:42.159379       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 05:24:42.159539       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 05:24:42.172462       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 05:24:42.184885       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 05:24:42.188039       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1216 05:24:42.188232       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:24:42.198792       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 05:24:42.193256       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:24:42.246641       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 05:24:42.288671       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 05:24:42.290480       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1216 05:24:42.328652       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 05:24:42.663962       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:24:46.743503       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-apiserver [c0d025670a91a4d8a61391711a080e93e875e808cbaa29712ba6feb5636a12cc] <==
	W1216 05:24:25.110846       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.110945       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.111024       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.111097       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.111180       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.111265       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112069       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112265       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112401       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112481       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112523       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112578       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112640       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112682       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112784       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112788       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112846       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112892       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112885       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112940       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112962       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.112984       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.113031       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.113034       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 05:24:25.113114       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6daae05879a8bfbcb59c78c8282efa943812c98cbe80bf9f862169baef894f22] <==
	I1216 05:24:48.232896       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 05:24:48.233550       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1216 05:24:48.233646       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1216 05:24:48.234489       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1216 05:24:48.234612       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1216 05:24:48.234656       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 05:24:48.234735       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 05:24:48.240662       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 05:24:48.243172       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 05:24:48.246410       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 05:24:48.269491       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:24:48.273865       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:24:48.273977       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 05:24:48.274066       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 05:24:48.274141       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 05:24:48.274175       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 05:24:48.274203       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 05:24:48.281227       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 05:24:48.281637       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 05:24:48.281779       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 05:24:48.281874       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:24:48.281909       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 05:24:48.281943       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 05:24:48.288407       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 05:24:48.298439       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-controller-manager [d8bd8959d629eec53cb3c82761a3da996cdd881c9d140609854bbf22b3702a51] <==
	I1216 05:23:38.311934       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 05:23:38.316431       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 05:23:38.311952       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 05:23:38.311970       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 05:23:38.311991       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 05:23:38.318061       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 05:23:38.318181       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 05:23:38.318272       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-879168"
	I1216 05:23:38.318355       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1216 05:23:38.318402       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1216 05:23:38.323757       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 05:23:38.331090       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 05:23:38.337431       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 05:23:38.345502       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 05:23:38.349296       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-879168" podCIDRs=["10.244.0.0/24"]
	I1216 05:23:38.358534       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 05:23:38.359457       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 05:23:38.359568       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 05:23:38.360311       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 05:23:38.361918       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:23:38.365146       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 05:23:38.365272       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 05:23:38.365809       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 05:23:38.387169       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:24:23.327766       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3e66591d8ee86b0879aeecb1a61f768173550e0389177364fa184b2694aff00f] <==
	I1216 05:23:40.360083       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:23:40.473331       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:23:40.574478       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:23:40.574595       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 05:23:40.574699       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:23:40.672953       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:23:40.673009       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:23:40.679954       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:23:40.680335       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:23:40.680537       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:23:40.682051       1 config.go:200] "Starting service config controller"
	I1216 05:23:40.682123       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:23:40.682167       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:23:40.682196       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:23:40.682232       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:23:40.682261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:23:40.682967       1 config.go:309] "Starting node config controller"
	I1216 05:23:40.689584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:23:40.689677       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:23:40.782377       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 05:23:40.782490       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:23:40.782521       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d02fe2c02c60f0f8687e22e1906cf222bb5b842f348a0412d32917e5dcfe0e2a] <==
	I1216 05:24:35.896662       1 server_linux.go:53] "Using iptables proxy"
	I1216 05:24:40.174956       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:24:42.491601       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:24:42.515087       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1216 05:24:42.585328       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:24:43.340872       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 05:24:43.342174       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:24:43.382347       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:24:43.382819       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:24:43.391188       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:24:43.434425       1 config.go:200] "Starting service config controller"
	I1216 05:24:43.434522       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:24:43.435361       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:24:43.456693       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:24:43.456453       1 config.go:309] "Starting node config controller"
	I1216 05:24:43.468207       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:24:43.468662       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:24:43.453420       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:24:43.469487       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:24:43.563708       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:24:43.585219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 05:24:43.614176       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2e109aacd16433537ebfcc0e8f0693e4255203df82bdfdcb738267fffab893f0] <==
	I1216 05:23:29.568546       1 serving.go:386] Generated self-signed cert in-memory
	I1216 05:23:33.094455       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 05:23:33.094493       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:23:33.102803       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 05:23:33.102924       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 05:23:33.102992       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:23:33.103031       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:23:33.103072       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:23:33.103129       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:23:33.103271       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:23:33.103345       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:23:33.209695       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:23:33.209743       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1216 05:23:33.209830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:24:25.079750       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1216 05:24:25.079778       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1216 05:24:25.079799       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1216 05:24:25.079826       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:24:25.079846       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1216 05:24:25.079860       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:24:25.080141       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1216 05:24:25.080167       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e815c7290489a4f8e21f38a344e67da2bf330eddc5d3f56582952cc63031840b] <==
	I1216 05:24:41.710119       1 serving.go:386] Generated self-signed cert in-memory
	I1216 05:24:43.315174       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 05:24:43.315210       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:24:43.361802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:24:43.361960       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 05:24:43.368432       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 05:24:43.362793       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:24:43.385658       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:24:43.386318       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:24:43.386829       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:24:43.386880       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:24:43.469156       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1216 05:24:43.486954       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 05:24:43.501525       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 05:24:33 pause-879168 kubelet[1325]: I1216 05:24:33.926903    1325 scope.go:117] "RemoveContainer" containerID="d1c7aee14d1048b18fcd07209b943009d9a85a69d0c5cee668acd989fb9ed309"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.927651    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-879168\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="66cd1c4e25b8653d72022362819b5204" pod="kube-system/kube-scheduler-pause-879168"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.927982    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-879168\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ced2c465da15e66896d67b58bacd7e98" pod="kube-system/kube-controller-manager-pause-879168"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.928304    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-879168\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0e11e4f5f3b2f19031ae2b14521bbeb9" pod="kube-system/kube-apiserver-pause-879168"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.928620    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-dc7d6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c8b5f04f-9213-46c2-bd06-e330b1668b3d" pod="kube-system/kindnet-dc7d6"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.928910    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2xxq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ee80a4c8-c171-4039-a5c1-ae20319deaf1" pod="kube-system/kube-proxy-f2xxq"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.929243    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-bz4lq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="043ed348-b26d-4228-942e-88494a373c9b" pod="kube-system/coredns-66bc5c9577-bz4lq"
	Dec 16 05:24:33 pause-879168 kubelet[1325]: E1216 05:24:33.929807    1325 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-879168\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f7e5ad46cb5cb2295e601e47ada1517a" pod="kube-system/etcd-pause-879168"
	Dec 16 05:24:41 pause-879168 kubelet[1325]: E1216 05:24:41.823219    1325 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-879168\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 16 05:24:41 pause-879168 kubelet[1325]: E1216 05:24:41.839835    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-f2xxq\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="ee80a4c8-c171-4039-a5c1-ae20319deaf1" pod="kube-system/kube-proxy-f2xxq"
	Dec 16 05:24:41 pause-879168 kubelet[1325]: E1216 05:24:41.842793    1325 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-879168\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 16 05:24:41 pause-879168 kubelet[1325]: E1216 05:24:41.847186    1325 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-879168\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.009336    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-bz4lq\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="043ed348-b26d-4228-942e-88494a373c9b" pod="kube-system/coredns-66bc5c9577-bz4lq"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.050264    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-879168\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="f7e5ad46cb5cb2295e601e47ada1517a" pod="kube-system/etcd-pause-879168"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.057638    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-879168\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="66cd1c4e25b8653d72022362819b5204" pod="kube-system/kube-scheduler-pause-879168"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.062030    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-879168\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="ced2c465da15e66896d67b58bacd7e98" pod="kube-system/kube-controller-manager-pause-879168"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.079522    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-879168\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="0e11e4f5f3b2f19031ae2b14521bbeb9" pod="kube-system/kube-apiserver-pause-879168"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.097557    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-dc7d6\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="c8b5f04f-9213-46c2-bd06-e330b1668b3d" pod="kube-system/kindnet-dc7d6"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.119723    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-dc7d6\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="c8b5f04f-9213-46c2-bd06-e330b1668b3d" pod="kube-system/kindnet-dc7d6"
	Dec 16 05:24:42 pause-879168 kubelet[1325]: E1216 05:24:42.148311    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-f2xxq\" is forbidden: User \"system:node:pause-879168\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-879168' and this object" podUID="ee80a4c8-c171-4039-a5c1-ae20319deaf1" pod="kube-system/kube-proxy-f2xxq"
	Dec 16 05:24:44 pause-879168 kubelet[1325]: W1216 05:24:44.727282    1325 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 16 05:24:54 pause-879168 kubelet[1325]: W1216 05:24:54.756575    1325 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 16 05:24:55 pause-879168 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 16 05:24:55 pause-879168 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 16 05:24:55 pause-879168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-879168 -n pause-879168
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-879168 -n pause-879168: exit status 2 (481.165737ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-879168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.62s)

x
+
TestNetworkPlugins/group/flannel/DNS (7200.086s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-989845 exec deployment/netcat -- nslookup kubernetes.default
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (37m22s)
		TestNetworkPlugins/group/flannel (1m23s)
		TestNetworkPlugins/group/flannel/DNS (0s)
		TestStartStop (38m3s)
		TestStartStop/group/no-preload (28m49s)
		TestStartStop/group/no-preload/serial (28m49s)
		TestStartStop/group/no-preload/serial/AddonExistsAfterStop (3m45s)

goroutine 6450 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000484c40, 0x4001269bb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x400045e2a0, {0x534c680, 0x2c, 0x2c}, {0x4001269d08?, 0x125774?, 0x5375080?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x400078ac80)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x400078ac80)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 3466 [chan receive, 10 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x400206afc0, 0x400177e5d0)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3149
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3779 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x400206aa80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3775
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 166 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4000347620, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 178
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4216 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x400206b880?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4215
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3437 [chan receive, 38 minutes]:
testing.(*testState).waitParallel(0x40006b24b0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40013cc000)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40013cc000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40013cc000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40013cc000, 0x40014fe100)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3466
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 165 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40006a2300?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 178
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6079 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x5b1b206d305b1b91?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6073
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6359 [IO wait]:
internal/poll.runtime_pollWait(0xffff63d95a00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001806500?, 0x40018be000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001806500, {0x40018be000, 0x2500, 0x2500})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
net.(*netFD).Read(0x4001806500, {0x40018be000?, 0x40018be05a?, 0x5?})
	/usr/local/go/src/net/fd_posix.go:68 +0x28
net.(*conn).Read(0x4000499580, {0x40018be000?, 0x40000d28a8?, 0x8b27c?})
	/usr/local/go/src/net/net.go:196 +0x34
crypto/tls.(*atLeastReader).Read(0x4001a4f680, {0x40018be000?, 0x40000d2908?, 0x2cbb64?})
	/usr/local/go/src/crypto/tls/conn.go:816 +0x38
bytes.(*Buffer).ReadFrom(0x4001623ea8, {0x369ec40, 0x4001a4f680})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
crypto/tls.(*Conn).readFromUntil(0x4001623c08, {0xffff63e5b000, 0x400135c870}, 0x40000d29b0?)
	/usr/local/go/src/crypto/tls/conn.go:838 +0xcc
crypto/tls.(*Conn).readRecordOrCCS(0x4001623c08, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:627 +0x340
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:589
crypto/tls.(*Conn).Read(0x4001623c08, {0x400186d000, 0x1000, 0x4000000000?})
	/usr/local/go/src/crypto/tls/conn.go:1392 +0x14c
bufio.(*Reader).Read(0x40019cd260, {0x4004ef8e44, 0x9, 0x542a60?})
	/usr/local/go/src/bufio/bufio.go:245 +0x188
io.ReadAtLeast({0x369cb80, 0x40019cd260}, {0x4004ef8e44, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x98
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0x4004ef8e44, 0x9, 0x4000000025?}, {0x369cb80?, 0x40019cd260?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:242 +0x58
golang.org/x/net/http2.(*Framer).ReadFrameHeader(0x4004ef8e00)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:505 +0x60
golang.org/x/net/http2.(*Framer).ReadFrame(0x4004ef8e00)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:564 +0x20
golang.org/x/net/http2.(*clientConnReadLoop).run(0x40000d2f98)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:2208 +0xb8
golang.org/x/net/http2.(*ClientConn).readLoop(0x4001551180)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:2077 +0x4c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 6358
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:866 +0xa90

goroutine 1004 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x4001584480, 0x400148d180)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1003
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 168 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40002d43d0, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40002d43c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000347620)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400138eee0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x40000a56a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x4004f46f38, {0x369e520, 0x40015ed2c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x40015ed2c0?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015f70c0, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 166
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 169 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x4001317f40, 0x40015a2f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0xe0?, 0x4001317f40, 0x4001317f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 166
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 170 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 169
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4220 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40019de710, 0x2)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40019de700)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013cefc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002b3a40?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x4001db5f38, {0x369e520, 0x40015e66f0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40015e66f0?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40019c2d30, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4217
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3780 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f92180, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3775
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3714 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40001bde50, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40001bde40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f93080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40016fdf80?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x40015926a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x40000d8f38, {0x369e520, 0x4001313f20}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40015927a8?, {0x369e520?, 0x4001313f20?}, 0x40?, 0x40014b5380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400151ac50, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3731
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3731 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f93080, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3678
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3760 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40017a6ad0, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40017a6ac0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f92180)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001926a80?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x40015916f8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x400141ef38, {0x369e520, 0x40018261b0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x40018261b0?}, 0xa0?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40014a4d90, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3780
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3457 [chan receive, 28 minutes]:
testing.(*T).Run(0x400206aa80, {0x296eb91?, 0x0?}, 0x4001806180)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x400206aa80)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x400206aa80, 0x40017a61c0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3453
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5117 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5116
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4792 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f93620, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4787
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4781 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40006ec610, 0x10)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40006ec600)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f93620)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400182a930?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x400131bea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x40000d6f38, {0x369e520, 0x4001309770}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4001309770?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40017d9860, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4792
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 805 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40014fc000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 804
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1472 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1471
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1247 [IO wait, 110 minutes]:
internal/poll.runtime_pollWait(0xffff63d95c00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400178ef00?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400178ef00)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400178ef00)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40017a76c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40017a76c0)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4001770700, {0x36d4000, 0x40017a76c0})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4001770700)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1245
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 3149 [chan receive, 38 minutes]:
testing.(*T).Run(0x400160a380, {0x296d71f?, 0xd2a4eecc496?}, 0x400177e5d0)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x400160a380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x400160a380, 0x339baf0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 618 [IO wait, 114 minutes]:
internal/poll.runtime_pollWait(0xffff641e2000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4000784a00?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4000784a00)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4000784a00)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40001bdd80)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40001bdd80)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40004d2100, {0x36d4000, 0x40001bdd80})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40004d2100)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 616
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 806 [chan receive, 112 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001274f60, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 804
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4217 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013cefc0, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4215
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1160 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x400189e300, 0x40016fdf10)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 753
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 6449 [select]:
os/exec.(*Cmd).watchCtx(0x40019db680, 0x4001bdc070)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 6430
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1471 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x400131bf40, 0x400159ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0x84?, 0x400131bf40, 0x400131bf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x400189e900?, 0x40004a6500?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400025ad80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1493
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4783 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4782
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1470 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40017a7c50, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40017a7c40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000347500)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000320770?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x40015a5f38, {0x369e520, 0x40019a2db0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40019a2db0?}, 0xd0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001912b20, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1493
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1492 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x400025ac00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1491
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3453 [chan receive, 10 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x400206a1c0, 0x339bd20)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3219
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5115 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x400163b4d0, 0xf)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400163b4c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013cf020)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001594e88?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x4001594ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x4001439f38, {0x369e520, 0x40019a2de0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x40019a2de0?}, 0x0?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001570db0, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5113
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3793 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x4002048740, 0x4004f47f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0xa0?, 0x4002048740, 0x4002048788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x400181eeb0?, 0x40004a6140?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40014b4a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3780
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5711 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e65a8, 0x4001dbf9a0}, {0x36d4660, 0x4001dc18c0}, 0x1, 0x0, 0x4001459b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e6618?, 0x40004dce00?}, 0x3b9aca00, 0x4000275d28?, 0x1, 0x4000275b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e6618, 0x40004dce00}, 0x400206ba40, {0x400180e3c0, 0x11}, {0x29941e1, 0x14}, {0x29ac150, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:380 +0x22c
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36e6618, 0x40004dce00}, 0x400206ba40, {0x400180e3c0, 0x11}, {0x29786f9?, 0x17cab46800161e84?}, {0x6940f6ff?, 0x400187bf58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:285 +0xd4
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x400206ba40?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x400206ba40, 0x400159c000)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3986
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3986 [chan receive, 3 minutes]:
testing.(*T).Run(0x40017e9340, {0x2994231?, 0x40000006ee?}, 0x400159c000)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40017e9340)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40017e9340, 0x4001806180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3457
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3794 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3793
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5112 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40013cc700?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5111
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4221 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x4001318740, 0x4001318788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0x88?, 0x4001318740, 0x4001318788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x400025a600?, 0x40004a7a40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40017e9880?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4217
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 967 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x400025af00, 0x40015587e0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 966
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 796 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40019de8d0, 0x2c)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40019de8c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001274f60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002b55e0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x4004f43f38, {0x369e520, 0x400128ae10}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x400128ae10?}, 0xb0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004ef1010, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 806
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1493 [chan receive, 82 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4000347500, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1491
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 798 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 797
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4222 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4221
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 797 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x400158ef40, 0x40015a0f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0x30?, 0x400158ef40, 0x400158ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x4e2e207865646e69?, 0x65536b726f777465?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400072e480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 806
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3730 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40006a3680?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3678
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6430 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0x11, 0x400126b948, 0x4, 0x4002041a70, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x400126baa8?, 0x1929a0?, 0x40016ce150?, 0x0?, 0x4001807c00?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x4001630400)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x400126ba78?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x40019db680)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x40019db680)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x400206efc0, 0x40019db680)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.5.1()
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:175 +0xdc
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/home/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x24
github.com/cenkalti/backoff/v4.doRetryNotify[...](0x400126be28?, {0x36c0ab8, 0x4001b06d80}, 0x339cc00, {0x0, 0x0?})
	/home/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0xcc
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x36c0ab8?, 0x4001b06d80?}, 0x161e83?, {0x0?, 0x0?})
	/home/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x58
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/home/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0x4001875ee8, 0x3b9aca00, 0x53d1ac1000, {0x0?, 0x400208ea88?, 0x161e83?})
	/home/jenkins/workspace/Build_Cross/pkg/util/retry/retry.go:60 +0xdc
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.5(0x400206efc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:180 +0x94
testing.tRunner(0x400206efc0, 0x40019b5020)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3439
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1835 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x400025ad80, 0x4001a82a80)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1834
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1933 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x40014b4600, 0x4001558930)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1418
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3439 [chan receive]:
testing.(*T).Run(0x40013cc380, {0x296be44?, 0x368adf0?}, 0x40019b5020)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40013cc380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:170 +0x798
testing.tRunner(0x40013cc380, 0x40014fe200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3466
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1205 [select, 110 minutes]:
net/http.(*persistConn).readLoop(0x40017997a0)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1203
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 1206 [select, 110 minutes]:
net/http.(*persistConn).writeLoop(0x40017997a0)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1203
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 3219 [chan receive, 38 minutes]:
testing.(*T).Run(0x400160a700, {0x296d71f?, 0x40015a4f58?}, 0x339bd20)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x400160a700)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x400160a700, 0x339bb38)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5116 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x40016c7740, 0x40016c7788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0xc9?, 0x40016c7740, 0x40016c7788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x0?, 0x40016c7750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x40013cc700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5113
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1862 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x400025b500, 0x4001a83490)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1861
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3716 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3715
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5113 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013cf020, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5111
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3715 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x4001317740, 0x4001317788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0x99?, 0x4001317740, 0x4001317788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x0?, 0x4001317750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x40006a3680?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3731
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4791 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40014b4a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4787
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 6375 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x400206ea80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6374
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 5411 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001a50f90, 0xc)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001a50f80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013ce5a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001a76310?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x4002046ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x4001879f38, {0x369e520, 0x4001826090}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4001826090?}, 0x40?, 0x400025b800?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001996f70, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5407
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
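The sync.Cond.Wait goroutines like this one are workqueue consumers: dynamicClientCert.runWorker loops on Get(), which parks on a condition variable while the queue is empty, and the delayingType waitingLoop goroutines elsewhere in the dump are the timer side of the same queues. A minimal sketch of that worker pattern, assuming k8s.io/client-go; the item type and payload are illustrative:

// Sketch of the client-go workqueue consumer behind these stacks: a
// worker goroutine blocks in Get() (sync.Cond.Wait when empty) until an
// item arrives or the queue shuts down.
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	q := workqueue.NewTyped[string]()

	done := make(chan struct{})
	go func() {
		defer close(done)
		for {
			item, shutdown := q.Get() // parks here while the queue is empty
			if shutdown {
				return
			}
			fmt.Println("processing", item)
			q.Done(item) // mark the item finished so it can be re-queued
		}
	}()

	q.Add("rotate-certs")
	q.ShutDownWithDrain() // finish in-flight items, then release Get
	<-done
}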

                                                
                                                
goroutine 4782 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x4002047f40, 0x4002047f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0xe9?, 0x4002047f40, 0x4002047f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x0?, 0x4002047f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x40014b4a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4792
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 6080 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4002064480, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6073
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 6381 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6380
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 6376 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001aebe60, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6374
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 6431 [IO wait]:
internal/poll.runtime_pollWait(0xffff63d96000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40019c5080?, 0x4001853c00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40019c5080, {0x4001853c00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x4000499980, {0x4001853c00?, 0x400204c548?, 0xcc7cc?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40019b51a0, {0x369c8e8, 0x40004656a8})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x40019b51a0}, {0x369c8e8, 0x40004656a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4000499980?, {0x369cae0, 0x40019b51a0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x4000499980, {0x369cae0, 0x40019b51a0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x40019b51a0}, {0x369c968, 0x4000499980}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001825680?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6430
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4
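Goroutine 6431 here (and its twin 6432 further down) are os/exec's output copiers: when Cmd.Stdout and Cmd.Stderr are set to in-memory buffers, Start spawns one io.Copy goroutine per stream (the writerDescriptor frames in the stack), and each sits in IO wait reading its pipe until the child closes it. A small sketch of the pattern, with an illustrative command:

// Each non-file writer assigned to Stdout/Stderr gets its own copier
// goroutine at Start; Wait joins them, so Run leaves nothing behind.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("echo", "hello")
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil { // Run = Start + Wait
		fmt.Println("run failed:", err)
		return
	}
	fmt.Printf("stdout: %q\n", stdout.String())
}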

                                                
                                                
goroutine 5767 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013ce360, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5762
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 5766 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40015dad80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5762
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 6058 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40020128d0, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40020128c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4002064480)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001688000?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x400177b6a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x400187bf38, {0x369e520, 0x400160eb40}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x400160eb40?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4002080f10, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6080
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 6432 [IO wait]:
internal/poll.runtime_pollWait(0xffff63d95800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40019c5140?, 0x4001853e00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40019c5140, {0x4001853e00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40004999a0, {0x4001853e00?, 0x40016c1548?, 0xcc7cc?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40019b51d0, {0x369c8e8, 0x40004656b0})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x40019b51d0}, {0x369c8e8, 0x40004656b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40004999a0?, {0x369cae0, 0x40019b51d0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40004999a0, {0x369cae0, 0x40019b51d0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x40019b51d0}, {0x369c968, 0x40004999a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x400206efc0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6430
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

                                                
                                                
goroutine 6380 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x40016c5740, 0x40016c5788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0x0?, 0x40016c5740, 0x40016c5788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x36e6618?, 0x40014cb490?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x40014cb3b0?, 0x0?, 0x400206e700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6376
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 6059 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x40016dff40, 0x40016dff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0x20?, 0x40016dff40, 0x40016dff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x400007c2d0?, 0x25?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001b0d800?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6080
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 5406 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40013cd180?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5405
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 6060 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6059
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 5407 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013ce5a0, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5405
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 5771 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x40016c5f40, 0x40016c5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0x0?, 0x40016c5f40, 0x40016c5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x36e6618?, 0x4001a83420?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4001a83340?, 0x0?, 0x400025b200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5767
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 5770 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x4001a50810, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001a50800)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013ce360)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001829a40?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x77772f2f3a7370?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x400159ff38, {0x369e520, 0x40018032c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x40018032c0?}, 0x0?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400151ac20, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5767
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 5412 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082150}, 0x40016c7f40, 0x40016c7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082150}, 0xe3?, 0x40016c7f40, 0x40016c7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082150?}, 0x0?, 0x40016c7f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x40013cd180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5407
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 5413 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5412
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 5772 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5771
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 6379 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40002d5b90, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40002d5b80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001aebe60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004ce2a0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082150}, 0x40015a1f38, {0x369e520, 0x40012e45d0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40012e45d0?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015710e0, 0x3b9aca00, 0x0, 0x1, 0x4000082150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6376
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                    

Test pass (240/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 8.69
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 4.33
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.51
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 139.82
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 10.82
57 TestAddons/StoppedEnableDisable 12.65
58 TestCertOptions 42.37
59 TestCertExpiration 234.42
61 TestForceSystemdFlag 40.2
62 TestForceSystemdEnv 35.69
67 TestErrorSpam/setup 33.19
68 TestErrorSpam/start 0.81
69 TestErrorSpam/status 1.18
70 TestErrorSpam/pause 6.54
71 TestErrorSpam/unpause 5.69
72 TestErrorSpam/stop 1.53
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 77.04
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 27.99
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.59
84 TestFunctional/serial/CacheCmd/cache/add_local 1.29
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 37.91
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.49
95 TestFunctional/serial/LogsFileCmd 1.49
96 TestFunctional/serial/InvalidService 4.16
98 TestFunctional/parallel/ConfigCmd 0.61
99 TestFunctional/parallel/DashboardCmd 10.04
100 TestFunctional/parallel/DryRun 0.51
101 TestFunctional/parallel/InternationalLanguage 0.18
102 TestFunctional/parallel/StatusCmd 1.06
106 TestFunctional/parallel/ServiceCmdConnect 7.6
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 19.49
110 TestFunctional/parallel/SSHCmd 0.73
111 TestFunctional/parallel/CpCmd 2.06
113 TestFunctional/parallel/FileSync 0.3
114 TestFunctional/parallel/CertSync 1.73
118 TestFunctional/parallel/NodeLabels 0.09
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.85
122 TestFunctional/parallel/License 0.27
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
125 TestFunctional/parallel/Version/short 0.07
126 TestFunctional/parallel/Version/components 0.9
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.56
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
134 TestFunctional/parallel/ImageCommands/ImageBuild 4.69
135 TestFunctional/parallel/ImageCommands/Setup 0.79
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.1
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.05
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.15
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/parallel/MountCmd/any-port 8.31
153 TestFunctional/parallel/MountCmd/specific-port 1.7
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.38
155 TestFunctional/parallel/ServiceCmd/DeployApp 6.27
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
157 TestFunctional/parallel/ProfileCmd/profile_list 0.43
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
159 TestFunctional/parallel/ServiceCmd/List 1.45
160 TestFunctional/parallel/ServiceCmd/JSONOutput 1.38
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
162 TestFunctional/parallel/ServiceCmd/Format 0.48
163 TestFunctional/parallel/ServiceCmd/URL 0.45
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.08
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.41
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.12
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.8
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.11
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.93
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.02
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.69
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.22
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.73
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.57
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.33
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.4
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.37
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.84
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.96
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.49
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.25
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.66
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.26
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.19
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.81
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.09
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.36
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.54
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.8
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.45
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.17
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 199.26
265 TestMultiControlPlane/serial/DeployApp 7.49
266 TestMultiControlPlane/serial/PingHostFromPods 1.58
267 TestMultiControlPlane/serial/AddWorkerNode 60.01
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
270 TestMultiControlPlane/serial/CopyFile 20.19
271 TestMultiControlPlane/serial/StopSecondaryNode 12.86
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
273 TestMultiControlPlane/serial/RestartSecondaryNode 20.35
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.13
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 209.02
276 TestMultiControlPlane/serial/DeleteSecondaryNode 30.79
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
278 TestMultiControlPlane/serial/StopCluster 36.06
279 TestMultiControlPlane/serial/RestartCluster 83.83
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
281 TestMultiControlPlane/serial/AddSecondaryNode 82.25
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
287 TestJSONOutput/start/Command 80.43
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.84
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 43.08
313 TestKicCustomNetwork/use_default_bridge_network 35.72
314 TestKicExistingNetwork 37.64
315 TestKicCustomSubnet 37.72
316 TestKicStaticIP 36.94
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 73.45
321 TestMountStart/serial/StartWithMountFirst 8.99
322 TestMountStart/serial/VerifyMountFirst 0.27
323 TestMountStart/serial/StartWithMountSecond 8.67
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 7.94
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 137.12
333 TestMultiNode/serial/DeployApp2Nodes 4.93
334 TestMultiNode/serial/PingHostFrom2Pods 0.93
335 TestMultiNode/serial/AddNode 54.51
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.71
338 TestMultiNode/serial/CopyFile 10.35
339 TestMultiNode/serial/StopNode 2.38
340 TestMultiNode/serial/StartAfterStop 7.98
341 TestMultiNode/serial/RestartKeepsNodes 74.47
342 TestMultiNode/serial/DeleteNode 5.57
343 TestMultiNode/serial/StopMultiNode 24.08
344 TestMultiNode/serial/RestartMultiNode 50.05
345 TestMultiNode/serial/ValidateNameConflict 35.88
350 TestPreload 120.89
352 TestScheduledStopUnix 111.55
355 TestInsufficientStorage 13.02
356 TestRunningBinaryUpgrade 299.97
359 TestMissingContainerUpgrade 119.98
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
363 TestPause/serial/Start 88.63
364 TestNoKubernetes/serial/StartWithK8s 45.92
365 TestNoKubernetes/serial/StartWithStopK8s 27.84
366 TestNoKubernetes/serial/Start 8.92
367 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
368 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
369 TestNoKubernetes/serial/ProfileList 1.1
370 TestNoKubernetes/serial/Stop 1.35
371 TestNoKubernetes/serial/StartNoArgs 6.93
372 TestPause/serial/SecondStartNoReconfiguration 31.65
373 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
375 TestStoppedBinaryUpgrade/Setup 1.81
376 TestStoppedBinaryUpgrade/Upgrade 63.45
377 TestStoppedBinaryUpgrade/MinikubeLogs 1.56
x
+
TestDownloadOnly/v1.28.0/json-events (8.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-229746 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-229746 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.689196215s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.69s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1216 04:10:49.467109  441727 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1216 04:10:49.467193  441727 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-229746
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-229746: exit status 85 (100.099681ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-229746 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-229746 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:10:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:10:40.826015  441733 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:10:40.826748  441733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:10:40.826795  441733 out.go:374] Setting ErrFile to fd 2...
	I1216 04:10:40.826814  441733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:10:40.827159  441733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	W1216 04:10:40.827356  441733 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22158-438353/.minikube/config/config.json: open /home/jenkins/minikube-integration/22158-438353/.minikube/config/config.json: no such file or directory
	I1216 04:10:40.827867  441733 out.go:368] Setting JSON to true
	I1216 04:10:40.828758  441733 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10387,"bootTime":1765847854,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:10:40.828865  441733 start.go:143] virtualization:  
	I1216 04:10:40.834768  441733 out.go:99] [download-only-229746] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1216 04:10:40.834993  441733 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 04:10:40.835105  441733 notify.go:221] Checking for updates...
	I1216 04:10:40.838903  441733 out.go:171] MINIKUBE_LOCATION=22158
	I1216 04:10:40.842434  441733 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:10:40.845701  441733 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:10:40.848998  441733 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:10:40.852103  441733 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 04:10:40.858416  441733 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:10:40.858682  441733 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:10:40.891798  441733 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:10:40.891909  441733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:10:40.954192  441733 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-16 04:10:40.945060382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:10:40.954304  441733 docker.go:319] overlay module found
	I1216 04:10:40.957476  441733 out.go:99] Using the docker driver based on user configuration
	I1216 04:10:40.957547  441733 start.go:309] selected driver: docker
	I1216 04:10:40.957559  441733 start.go:927] validating driver "docker" against <nil>
	I1216 04:10:40.957688  441733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:10:41.018484  441733 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-16 04:10:41.008327659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:10:41.018643  441733 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:10:41.018937  441733 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1216 04:10:41.019106  441733 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:10:41.022381  441733 out.go:171] Using Docker driver with root privileges
	I1216 04:10:41.025440  441733 cni.go:84] Creating CNI manager for ""
	I1216 04:10:41.025532  441733 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 04:10:41.025547  441733 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 04:10:41.025632  441733 start.go:353] cluster config:
	{Name:download-only-229746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-229746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:10:41.028730  441733 out.go:99] Starting "download-only-229746" primary control-plane node in "download-only-229746" cluster
	I1216 04:10:41.028760  441733 cache.go:134] Beginning downloading kic base image for docker with crio
	I1216 04:10:41.031655  441733 out.go:99] Pulling base image v0.0.48-1765575274-22117 ...
	I1216 04:10:41.031702  441733 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 04:10:41.031765  441733 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
	I1216 04:10:41.051750  441733 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
	I1216 04:10:41.051776  441733 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb to local cache
	I1216 04:10:41.051920  441733 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local cache directory
	I1216 04:10:41.052023  441733 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb to local cache
	I1216 04:10:41.092203  441733 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:10:41.092232  441733 cache.go:65] Caching tarball of preloaded images
	I1216 04:10:41.092406  441733 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 04:10:41.095819  441733 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1216 04:10:41.095855  441733 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1216 04:10:41.179886  441733 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1216 04:10:41.180018  441733 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1216 04:10:45.237607  441733 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1216 04:10:45.238020  441733 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/download-only-229746/config.json ...
	I1216 04:10:45.238061  441733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/download-only-229746/config.json: {Name:mk7d284f31950b113f7abf11affec82843328b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:10:45.238308  441733 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 04:10:45.238569  441733 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22158-438353/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-229746 host does not exist
	  To start a cluster, run: "minikube start -p download-only-229746"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
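The checksum query on the download URLs in the log above (checksum=md5:e092595ade89dbfc477bd4cd6b9c633b) tells the downloader to verify the tarball after fetching, in the style of hashicorp/go-getter. A minimal sketch of that verification step, assuming nothing beyond the standard library; the file name is illustrative and the digest is the one reported in the log:

// Hash the downloaded file and compare against the expected MD5 digest.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// illustrative path; the digest is the GCS value from the log above
	if err := verifyMD5("preloaded-images.tar.lz4", "e092595ade89dbfc477bd4cd6b9c633b"); err != nil {
		fmt.Println(err)
	}
}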

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-229746
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (4.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-218041 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-218041 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.331154556s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (4.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1216 04:10:54.266707  441727 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1216 04:10:54.266745  441727 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-218041
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-218041: exit status 85 (90.89747ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-229746 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-229746 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ delete  │ -p download-only-229746                                                                                                                                                   │ download-only-229746 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ start   │ -o=json --download-only -p download-only-218041 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-218041 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:10:49
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:10:49.982567  441931 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:10:49.982696  441931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:10:49.982712  441931 out.go:374] Setting ErrFile to fd 2...
	I1216 04:10:49.982717  441931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:10:49.982952  441931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:10:49.983356  441931 out.go:368] Setting JSON to true
	I1216 04:10:49.984126  441931 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10396,"bootTime":1765847854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:10:49.984195  441931 start.go:143] virtualization:  
	I1216 04:10:49.987688  441931 out.go:99] [download-only-218041] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:10:49.987978  441931 notify.go:221] Checking for updates...
	I1216 04:10:49.991822  441931 out.go:171] MINIKUBE_LOCATION=22158
	I1216 04:10:49.994875  441931 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:10:49.997855  441931 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:10:50.000718  441931 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:10:50.004846  441931 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 04:10:50.017130  441931 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:10:50.017608  441931 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:10:50.044005  441931 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:10:50.044127  441931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:10:50.105047  441931 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 04:10:50.094119652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:10:50.105338  441931 docker.go:319] overlay module found
	I1216 04:10:50.108543  441931 out.go:99] Using the docker driver based on user configuration
	I1216 04:10:50.108587  441931 start.go:309] selected driver: docker
	I1216 04:10:50.108594  441931 start.go:927] validating driver "docker" against <nil>
	I1216 04:10:50.108710  441931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:10:50.179263  441931 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 04:10:50.169762143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:10:50.179426  441931 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:10:50.179724  441931 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1216 04:10:50.179887  441931 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:10:50.183014  441931 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-218041 host does not exist
	  To start a cluster, run: "minikube start -p download-only-218041"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-218041
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (4.51s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-956467 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-956467 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.509996618s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.51s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1216 04:10:59.233312  441727 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1216 04:10:59.233354  441727 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-956467
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-956467: exit status 85 (84.283107ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-229746 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-229746 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ delete  │ -p download-only-229746                                                                                                                                                          │ download-only-229746 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ start   │ -o=json --download-only -p download-only-218041 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-218041 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ delete  │ -p download-only-218041                                                                                                                                                          │ download-only-218041 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │ 16 Dec 25 04:10 UTC │
	│ start   │ -o=json --download-only -p download-only-956467 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-956467 │ jenkins │ v1.37.0 │ 16 Dec 25 04:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:10:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:10:54.769429  442128 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:10:54.769557  442128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:10:54.769569  442128 out.go:374] Setting ErrFile to fd 2...
	I1216 04:10:54.769575  442128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:10:54.769847  442128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:10:54.770262  442128 out.go:368] Setting JSON to true
	I1216 04:10:54.771075  442128 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10401,"bootTime":1765847854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:10:54.771144  442128 start.go:143] virtualization:  
	I1216 04:10:54.774576  442128 out.go:99] [download-only-956467] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:10:54.774796  442128 notify.go:221] Checking for updates...
	I1216 04:10:54.777776  442128 out.go:171] MINIKUBE_LOCATION=22158
	I1216 04:10:54.781306  442128 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:10:54.784254  442128 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:10:54.787016  442128 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:10:54.789909  442128 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 04:10:54.795618  442128 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:10:54.795907  442128 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:10:54.826548  442128 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:10:54.826662  442128 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:10:54.889867  442128 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 04:10:54.875529175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:10:54.889986  442128 docker.go:319] overlay module found
	I1216 04:10:54.893126  442128 out.go:99] Using the docker driver based on user configuration
	I1216 04:10:54.893175  442128 start.go:309] selected driver: docker
	I1216 04:10:54.893183  442128 start.go:927] validating driver "docker" against <nil>
	I1216 04:10:54.893300  442128 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:10:54.955040  442128 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-16 04:10:54.945992878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:10:54.955204  442128 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:10:54.955474  442128 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1216 04:10:54.955619  442128 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:10:54.958907  442128 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-956467 host does not exist
	  To start a cluster, run: "minikube start -p download-only-956467"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-956467
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1216 04:11:00.748733  441727 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-260964 --alsologtostderr --binary-mirror http://127.0.0.1:39905 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-260964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-260964
--- PASS: TestBinaryMirror (0.62s)
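
TestBinaryMirror passes --binary-mirror http://127.0.0.1:39905 so that minikube fetches kubectl, kubelet and kubeadm from a local HTTP endpoint instead of dl.k8s.io. A mirror is just a static file server whose directory is laid out the way minikube expects (an assumption here; minikube's download code defines the exact path scheme). A minimal sketch, with an illustrative directory and the port from the log:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory mirroring dl.k8s.io's release tree, e.g.
	// /srv/k8s-mirror/v1.34.2/bin/linux/arm64/kubectl (layout assumed).
	fs := http.FileServer(http.Dir("/srv/k8s-mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:39905", fs))
}

With that running, a start such as the one in the log (minikube start --download-only --binary-mirror http://127.0.0.1:39905 ...) resolves the binaries against it.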

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-266389
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-266389: exit status 85 (70.728231ms)

-- stdout --
	* Profile "addons-266389" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-266389"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-266389
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-266389: exit status 85 (74.222426ms)

-- stdout --
	* Profile "addons-266389" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-266389"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (139.82s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-266389 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-266389 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m19.821444962s)
--- PASS: TestAddons/Setup (139.82s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-266389 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-266389 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (10.82s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-266389 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-266389 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7277adc3-8a48-43b3-97ad-b0f6c2dc3c95] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7277adc3-8a48-43b3-97ad-b0f6c2dc3c95] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003788901s
addons_test.go:696: (dbg) Run:  kubectl --context addons-266389 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-266389 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-266389 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-266389 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.82s)
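
The FakeCredentials flow above is the gcp-auth addon's mutating webhook at work: the busybox pod is created with no credentials, and the webhook injects GOOGLE_APPLICATION_CREDENTIALS, a mounted /google-app-creds.json, and GOOGLE_CLOUD_PROJECT. The test verifies this by shelling out to kubectl; a small Go sketch of the same check (the profile, pod and variable names come from the log, everything else is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// podEnv reads one environment variable from a running pod via kubectl exec,
// the same mechanism the test uses.
func podEnv(kubeContext, pod, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
		"/bin/sh", "-c", "printenv "+name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	creds, err := podEnv("addons-266389", "busybox", "GOOGLE_APPLICATION_CREDENTIALS")
	if err != nil {
		fmt.Fprintln(os.Stderr, "credentials not injected:", err)
		os.Exit(1)
	}
	fmt.Println("injected credentials file:", creds) // expect /google-app-creds.json
}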

TestAddons/StoppedEnableDisable (12.65s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-266389
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-266389: (12.234872425s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-266389
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-266389
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-266389
--- PASS: TestAddons/StoppedEnableDisable (12.65s)

TestCertOptions (42.37s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-888180 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-888180 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.663972243s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-888180 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-888180 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-888180 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-888180" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-888180
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-888180: (2.714491858s)
--- PASS: TestCertOptions (42.37s)

TestCertExpiration (234.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-096436 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-096436 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (34.32263465s)
E1216 05:35:24.308422  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-096436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-096436 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.621831081s)
helpers_test.go:176: Cleaning up "cert-expiration-096436" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-096436
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-096436: (2.472780263s)
--- PASS: TestCertExpiration (234.42s)
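
The two starts above exercise certificate regeneration: the first issues cluster certificates valid for only three minutes (--cert-expiration=3m), the gap before the second start is consistent with waiting that validity out, and the second start (--cert-expiration=8760h) must succeed by reissuing the expired certificates. One way to inspect the result by hand is to parse the apiserver certificate and print its expiry; a sketch, assuming the cert has first been copied out of the node (the path is the one TestCertOptions reads above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// e.g. first: minikube -p cert-expiration-096436 ssh \
	//   "sudo cat /var/lib/minikube/certs/apiserver.crt" > apiserver.crt
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	left := time.Until(cert.NotAfter).Round(time.Minute)
	fmt.Printf("apiserver cert expires %s (%s from now)\n",
		cert.NotAfter.Format(time.RFC3339), left)
}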

TestForceSystemdFlag (40.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-037247 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-037247 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.433661954s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-037247 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-037247" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-037247
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-037247: (2.474984758s)
--- PASS: TestForceSystemdFlag (40.20s)
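
The only assertion visible in the log is a cat of /etc/crio/crio.conf.d/02-crio.conf over SSH; with --force-systemd the expected content is a systemd cgroup-manager setting. A standalone sketch of that check, run inside the node (the exact substring is an assumption about what the test matches):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	conf, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if strings.Contains(string(conf), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
		return
	}
	fmt.Println("CRI-O is not configured for systemd cgroups")
	os.Exit(1)
}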

TestForceSystemdEnv (35.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-288672 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-288672 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.184542619s)
helpers_test.go:176: Cleaning up "force-systemd-env-288672" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-288672
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-288672: (2.508019817s)
--- PASS: TestForceSystemdEnv (35.69s)

TestErrorSpam/setup (33.19s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-555462 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-555462 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-555462 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-555462 --driver=docker  --container-runtime=crio: (33.190950009s)
--- PASS: TestErrorSpam/setup (33.19s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.18s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 status
--- PASS: TestErrorSpam/status (1.18s)

TestErrorSpam/pause (6.54s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 pause: exit status 80 (1.83634368s)

-- stdout --
	* Pausing node nospam-555462 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:17:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 pause: exit status 80 (2.45068182s)

-- stdout --
	* Pausing node nospam-555462 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:17:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 pause: exit status 80 (2.255514535s)

-- stdout --
	* Pausing node nospam-555462 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.54s)
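
All three pause attempts above fail at the same step: per the GUEST_PAUSE error, minikube runs sudo runc list -f json inside the node to enumerate containers, and runc exits 1 because its state directory /run/runc does not exist on this node. The step is easy to reproduce in isolation; a sketch (the struct covers just the two fields of runc's JSON output printed here):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// container holds the subset of `runc list -f json` output we print.
type container struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// The failure mode in the log: runc cannot open /run/runc and exits 1.
		fmt.Fprintln(os.Stderr, "runc list failed:", err)
		os.Exit(1)
	}
	var containers []container
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, c := range containers {
		fmt.Println(c.ID, c.Status)
	}
}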

TestErrorSpam/unpause (5.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 unpause: exit status 80 (1.763365961s)

-- stdout --
	* Unpausing node nospam-555462 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:17:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 unpause: exit status 80 (1.921948917s)

-- stdout --
	* Unpausing node nospam-555462 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:17:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 unpause: exit status 80 (2.004857511s)

-- stdout --
	* Unpausing node nospam-555462 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-16T04:17:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.69s)

TestErrorSpam/stop (1.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 stop: (1.320313872s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-555462 --log_dir /tmp/nospam-555462 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-861171 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1216 04:18:22.215392  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:22.221820  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:22.233264  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:22.254716  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:22.296192  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:22.377664  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:22.539175  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:22.860837  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:23.502303  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:24.783674  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:27.345210  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:32.466564  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:18:42.709410  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-861171 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.038998708s)
--- PASS: TestFunctional/serial/StartWithProxy (77.04s)
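
Note: the repeated cert_rotation errors above appear to be noise rather than a failure — client-go is still trying to reload the client certificate of the addons-266389 profile left over from the earlier TestAddons run, whose files have since been deleted. For readers unfamiliar with the harness, each "(dbg) Run" line amounts to shelling out to the freshly built binary and timing the call; a minimal Go sketch, with the binary path and flags taken from the log above and the surrounding structure assumed:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Same invocation as functional_test.go:2239 above; CombinedOutput
        // interleaves stdout and stderr the way the report does.
        cmd := exec.Command("out/minikube-linux-arm64", "start",
            "-p", "functional-861171", "--memory=4096", "--apiserver-port=8441",
            "--wait=all", "--driver=docker", "--container-runtime=crio")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("start failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("start took %s\n", time.Since(start)) // cf. 1m17.038998708s above
    }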

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (27.99s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1216 04:19:02.522043  441727 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-861171 --alsologtostderr -v=8
E1216 04:19:03.191672  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-861171 --alsologtostderr -v=8: (27.985219797s)
functional_test.go:678: soft start took 27.985724688s for "functional-861171" cluster.
I1216 04:19:30.507552  441727 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (27.99s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-861171 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-861171 cache add registry.k8s.io/pause:3.1: (1.263249854s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-861171 cache add registry.k8s.io/pause:3.3: (1.206557172s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-861171 cache add registry.k8s.io/pause:latest: (1.118072463s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-861171 /tmp/TestFunctionalserialCacheCmdcacheadd_local3483626370/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 cache add minikube-local-cache-test:functional-861171
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 cache delete minikube-local-cache-test:functional-861171
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-861171
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-861171 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (327.202733ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)
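
The reload sequence above is the interesting part of the cache tests: remove the image inside the node, prove `crictl inspecti` now fails (the expected exit status 1), run `cache reload`, and prove the image is back. A minimal sketch of the same sequence, assuming a trivial run helper; profile and image names come from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) error {
        return exec.Command("out/minikube-linux-arm64", args...).Run()
    }

    func main() {
        const p = "functional-861171"
        const img = "registry.k8s.io/pause:latest"
        _ = run("-p", p, "ssh", "sudo crictl rmi "+img) // may already be absent
        if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err == nil {
            fmt.Println("expected inspecti to fail after rmi")
            return
        }
        if err := run("-p", p, "cache", "reload"); err != nil {
            fmt.Println("cache reload failed:", err)
            return
        }
        if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
            fmt.Println("image still missing after reload:", err)
            return
        }
        fmt.Println("cache reload restored", img)
    }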

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 kubectl -- --context functional-861171 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-861171 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.91s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-861171 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 04:19:44.153817  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-861171 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.902659201s)
functional_test.go:776: restart took 37.902754078s for "functional-861171" cluster.
I1216 04:20:16.138895  441727 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (37.91s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-861171 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
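
The phase/status pairs above come from walking the JSON that kubectl returns for the control-plane pods. A self-contained sketch of that check, declaring only the fields it reads (the struct names are ours, not the test's):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type podList struct {
        Items []struct {
            Metadata struct {
                Labels map[string]string `json:"labels"`
            } `json:"metadata"`
            Status struct {
                Phase      string `json:"phase"`
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-861171",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            fmt.Println("bad JSON:", err)
            return
        }
        for _, p := range pods.Items {
            ready := "Unknown"
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" {
                    ready = c.Status // "True" maps to the "Ready" lines above
                }
            }
            fmt.Printf("%s phase: %s, ready: %s\n",
                p.Metadata.Labels["component"], p.Status.Phase, ready)
        }
    }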

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-861171 logs: (1.485559899s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 logs --file /tmp/TestFunctionalserialLogsFileCmd1768815060/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-861171 logs --file /tmp/TestFunctionalserialLogsFileCmd1768815060/001/logs.txt: (1.485476429s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
TestFunctional/serial/InvalidService (4.16s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-861171 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-861171
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-861171: exit status 115 (378.143073ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30746 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-861171 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)
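
Exit status 115 is minikube's SVC_UNREACHABLE code, so the failure itself is the pass condition here. A sketch of asserting it with standard os/exec (command line taken from the log):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64",
            "service", "invalid-svc", "-p", "functional-861171")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 115 {
            fmt.Println("got expected exit status 115 (SVC_UNREACHABLE)")
            return
        }
        fmt.Println("unexpected result:", err)
    }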

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-861171 config get cpus: exit status 14 (74.329643ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-861171 config get cpus: exit status 14 (106.436531ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.61s)
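
The round-trip above encodes the contract that `config get` on an unset key exits 14, while get-after-set succeeds. A compact sketch, assuming a small exit-code helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // code runs the minikube binary and returns its exit code (0 on success).
    func code(args ...string) int {
        err := exec.Command("out/minikube-linux-arm64", args...).Run()
        if err == nil {
            return 0
        }
        if ee, ok := err.(*exec.ExitError); ok {
            return ee.ExitCode()
        }
        return -1 // binary missing, signal, etc.
    }

    func main() {
        const p = "functional-861171"
        fmt.Println("unset:", code("-p", p, "config", "unset", "cpus")) // 0
        fmt.Println("get:", code("-p", p, "config", "get", "cpus"))     // 14: key not in config
        fmt.Println("set:", code("-p", p, "config", "set", "cpus", "2"))
        fmt.Println("get:", code("-p", p, "config", "get", "cpus")) // 0 once the key is set
    }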

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-861171 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-861171 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 468275: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.04s)

                                                
                                    
TestFunctional/parallel/DryRun (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-861171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-861171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (215.041694ms)

                                                
                                                
-- stdout --
	* [functional-861171] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:20:56.810217  467760 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:20:56.810414  467760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:20:56.810446  467760 out.go:374] Setting ErrFile to fd 2...
	I1216 04:20:56.810466  467760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:20:56.810759  467760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:20:56.811169  467760 out.go:368] Setting JSON to false
	I1216 04:20:56.812171  467760 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11003,"bootTime":1765847854,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:20:56.812289  467760 start.go:143] virtualization:  
	I1216 04:20:56.815902  467760 out.go:179] * [functional-861171] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:20:56.819773  467760 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:20:56.821052  467760 notify.go:221] Checking for updates...
	I1216 04:20:56.827124  467760 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:20:56.830129  467760 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:20:56.832980  467760 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:20:56.836117  467760 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:20:56.838974  467760 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:20:56.842347  467760 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:20:56.842898  467760 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:20:56.873243  467760 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:20:56.873417  467760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:20:56.939261  467760 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-16 04:20:56.929978771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:20:56.939365  467760 docker.go:319] overlay module found
	I1216 04:20:56.942582  467760 out.go:179] * Using the docker driver based on existing profile
	I1216 04:20:56.945735  467760 start.go:309] selected driver: docker
	I1216 04:20:56.945829  467760 start.go:927] validating driver "docker" against &{Name:functional-861171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-861171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:20:56.946002  467760 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:20:56.950462  467760 out.go:203] 
	W1216 04:20:56.954402  467760 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 04:20:56.960505  467760 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-861171 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
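
The dry run fails fast because 250MB is below minikube's usable minimum of 1800MB (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY). The shape of that validation, with the constant taken from the message and the function name ours:

    package main

    import "fmt"

    const minUsableMemoryMB = 1800 // from the RSRC_INSUFFICIENT_REQ_MEMORY message above

    func validateMemory(requestedMB int) error {
        if requestedMB < minUsableMemoryMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMemoryMB)
        }
        return nil
    }

    func main() {
        fmt.Println(validateMemory(250))  // error, mirrors the exit status 23 case above
        fmt.Println(validateMemory(4096)) // <nil>
    }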

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-861171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-861171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (179.954214ms)

                                                
                                                
-- stdout --
	* [functional-861171] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:20:58.377559  468100 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:20:58.377702  468100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:20:58.377728  468100 out.go:374] Setting ErrFile to fd 2...
	I1216 04:20:58.377747  468100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:20:58.378832  468100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:20:58.379262  468100 out.go:368] Setting JSON to false
	I1216 04:20:58.380159  468100 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11005,"bootTime":1765847854,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:20:58.380230  468100 start.go:143] virtualization:  
	I1216 04:20:58.383204  468100 out.go:179] * [functional-861171] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1216 04:20:58.384794  468100 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:20:58.384969  468100 notify.go:221] Checking for updates...
	I1216 04:20:58.387037  468100 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:20:58.388216  468100 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:20:58.389296  468100 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:20:58.390400  468100 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:20:58.391481  468100 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:20:58.393306  468100 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:20:58.394041  468100 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:20:58.422663  468100 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:20:58.422792  468100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:20:58.485954  468100 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-16 04:20:58.476502888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:20:58.486061  468100 docker.go:319] overlay module found
	I1216 04:20:58.487805  468100 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 04:20:58.489299  468100 start.go:309] selected driver: docker
	I1216 04:20:58.489322  468100 start.go:927] validating driver "docker" against &{Name:functional-861171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-861171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:20:58.489430  468100 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:20:58.491387  468100 out.go:203] 
	W1216 04:20:58.492758  468100 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 04:20:58.494456  468100 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
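
This is the DryRun scenario re-run with a French locale: the stderr message is the translation of the same "requested memory allocation 250MiB is less than the usable minimum of 1800MB" error above. A minimal sketch, assuming the locale is selected through the environment (LC_ALL=fr here); the command line is the one from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-861171",
            "--dry-run", "--memory", "250MB", "--alsologtostderr",
            "--driver=docker", "--container-runtime=crio")
        cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumed locale switch
        out, _ := cmd.CombinedOutput()
        fmt.Printf("%s", out) // expect the French RSRC_INSUFFICIENT_REQ_MEMORY message
    }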

                                                
                                    
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
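
`status -f` renders a Go text/template over minikube's status struct, which is why `{{.Host}}` and friends work. A self-contained sketch using the exact format string from the log against a stand-in struct with example values (note that "kublet" is a literal label inside the template, copied verbatim, not a field reference):

    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    type status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
        t := template.Must(template.New("status").Parse(format))
        _ = t.Execute(os.Stdout, status{"Running", "Running", "Running", "Configured"})
        fmt.Println()
        // prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
    }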

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-861171 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-861171 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-4mbx2" [1b6e5094-820e-44f6-81bc-c675ffa0fd9a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-4mbx2" [1b6e5094-820e-44f6-81bc-c675ffa0fd9a] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00369795s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30951
functional_test.go:1680: http://192.168.49.2:30951: success! body:
Request served by hello-node-connect-7d85dfc575-4mbx2

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30951
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.60s)
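
The success body above is what the echo-server pod returns for a plain GET against the NodePort URL that `service ... --url` printed. A sketch of the same check; the URL is specific to this run and will differ between runs:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Get("http://192.168.49.2:30951")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status %d\n%s", resp.StatusCode, body) // "Request served by hello-node-connect-..."
    }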

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (19.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [a14acb24-d5d0-4ac7-b3ec-6916697f3bb7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003452214s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-861171 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-861171 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-861171 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-861171 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ee8cb582-a7ac-447e-a02f-40f7fe81cc0a] Pending
helpers_test.go:353: "sp-pod" [ee8cb582-a7ac-447e-a02f-40f7fe81cc0a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00384364s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-861171 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-861171 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-861171 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c0378ff8-2570-4385-b09d-c265aca5cbcf] Pending
helpers_test.go:353: "sp-pod" [c0378ff8-2570-4385-b09d-c265aca5cbcf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003268729s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-861171 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.49s)
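
The core assertion here is persistence: a file touched inside the mounted volume must still exist after the pod is deleted and re-created against the same claim. A sketch of that sequence via kubectl (context and paths from the log; the real test also waits for the new pod to be Running before the final exec):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubectl(args ...string) ([]byte, error) {
        full := append([]string{"--context", "functional-861171"}, args...)
        return exec.Command("kubectl", full...).CombinedOutput()
    }

    func main() {
        kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
        kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
        kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // (wait for the new sp-pod to reach Running here)
        out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
        fmt.Printf("%s err=%v\n", out, err) // expect "foo" to still be listed
    }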

                                                
                                    
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh -n functional-861171 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 cp functional-861171:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1582164488/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh -n functional-861171 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh -n functional-861171 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.06s)

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/441727/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo cat /etc/test/nested/copy/441727/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
TestFunctional/parallel/CertSync (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/441727.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo cat /etc/ssl/certs/441727.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/441727.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo cat /usr/share/ca-certificates/441727.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4417272.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo cat /etc/ssl/certs/4417272.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4417272.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo cat /usr/share/ca-certificates/4417272.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.73s)
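
Note: the hash-named files checked here (51391683.0, 3ec20f2e.0) appear to be OpenSSL subject-hash aliases — cert sync installs each test certificate both under its own name and under the hashed name that OpenSSL's CA lookup expects in /etc/ssl/certs.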

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-861171 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
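
The --template argument above is ordinary text/template syntax ranging over the node's label map. A self-contained sketch of the same template against an example label set (the map stands in for `(index .items 0).metadata.labels`):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        const tmpl = "{{range $k, $v := .}}{{$k}} {{end}}"
        labels := map[string]string{
            "kubernetes.io/arch":     "arm64",
            "kubernetes.io/hostname": "functional-861171",
            "kubernetes.io/os":       "linux",
        }
        t := template.Must(template.New("labels").Parse(tmpl))
        _ = t.Execute(os.Stdout, labels) // prints each label key, space-separated, sorted
    }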

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-861171 ssh "sudo systemctl is-active docker": exit status 1 (375.125419ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-861171 ssh "sudo systemctl is-active containerd": exit status 1 (477.926886ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)
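
Note: `systemctl is-active` exits 3 when a unit is inactive, which is the "Process exited with status 3" seen for both probes. With crio as the active runtime, docker and containerd reporting `inactive` is exactly what the test wants, so the non-zero exits are the pass condition.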

                                                
                                    
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-861171 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-861171 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-861171 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 463860: os: process already finished
helpers_test.go:520: unable to terminate pid 463674: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-861171 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.9s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.90s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-861171 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-861171 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [b276b68a-afd4-4d58-836b-fec7bdc5f42c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [b276b68a-afd4-4d58-836b-fec7bdc5f42c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004541227s
I1216 04:20:33.864334  441727 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.56s)
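
Note: the harness polls pods labelled run=nginx-svc until they are Running (here within roughly 9s of the 4m0s budget). An equivalent manual check using kubectl wait, assuming the same context and label:

  $ kubectl --context functional-861171 apply -f testdata/testsvc.yaml
  $ kubectl --context functional-861171 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m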

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-861171 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-861171
localhost/kicbase/echo-server:functional-861171
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-861171 image ls --format short --alsologtostderr:
I1216 04:21:05.551285  468830 out.go:360] Setting OutFile to fd 1 ...
I1216 04:21:05.551403  468830 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:21:05.551411  468830 out.go:374] Setting ErrFile to fd 2...
I1216 04:21:05.551416  468830 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:21:05.551758  468830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:21:05.552689  468830 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:21:05.552834  468830 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:21:05.553541  468830 cli_runner.go:164] Run: docker container inspect functional-861171 --format={{.State.Status}}
I1216 04:21:05.577820  468830 ssh_runner.go:195] Run: systemctl --version
I1216 04:21:05.577885  468830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-861171
I1216 04:21:05.600820  468830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-861171/id_rsa Username:docker}
I1216 04:21:05.699642  468830 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-861171 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-861171  │ ce2d2cda2d858 │ 4.79MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ localhost/minikube-local-cache-test     │ functional-861171  │ 37fa4d23cdd0f │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 10afed3caf3ee │ 55.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-861171 image ls --format table --alsologtostderr:
I1216 04:21:08.810933  469122 out.go:360] Setting OutFile to fd 1 ...
I1216 04:21:08.811095  469122 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:21:08.811105  469122 out.go:374] Setting ErrFile to fd 2...
I1216 04:21:08.811109  469122 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:21:08.811349  469122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:21:08.811945  469122 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:21:08.812069  469122 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:21:08.812572  469122 cli_runner.go:164] Run: docker container inspect functional-861171 --format={{.State.Status}}
I1216 04:21:08.833273  469122 ssh_runner.go:195] Run: systemctl --version
I1216 04:21:08.833334  469122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-861171
I1216 04:21:08.855881  469122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-861171/id_rsa Username:docker}
I1216 04:21:08.960226  469122 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-861171 image ls --format json --alsologtostderr:
[{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-861171"],"size":"4788229"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d","public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077248"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"37fa4d23cdd0fd4520bc18cc4baf09518b21dab32ed4c0773492b94c7b3092c4","repoDigests":["localhost/minikube-local-cache-test@sha256:c438845dc3485fc01d9120817b748c94d6e5d77bef41fe1b6217a52166fd78c4"],"repoTags":["localhost/minikube-local-cache-test:functional-861171"],"size":"3328"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-861171 image ls --format json --alsologtostderr:
I1216 04:21:08.584582  469086 out.go:360] Setting OutFile to fd 1 ...
I1216 04:21:08.584713  469086 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:21:08.584726  469086 out.go:374] Setting ErrFile to fd 2...
I1216 04:21:08.584733  469086 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:21:08.585139  469086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:21:08.585914  469086 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:21:08.586057  469086 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:21:08.586663  469086 cli_runner.go:164] Run: docker container inspect functional-861171 --format={{.State.Status}}
I1216 04:21:08.605644  469086 ssh_runner.go:195] Run: systemctl --version
I1216 04:21:08.605724  469086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-861171
I1216 04:21:08.623052  469086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-861171/id_rsa Username:docker}
I1216 04:21:08.724609  469086 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
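
Note: the JSON output above is a single array of image objects (id, repoDigests, repoTags, size). To pull out just the tagged names, a small jq filter works, assuming jq is available on the host:

  $ out/minikube-linux-arm64 -p functional-861171 image ls --format json | jq -r '.[] | .repoTags[]?'

The trailing "?" skips the untagged entries (the dashboard and metrics-scraper images, whose repoTags arrays are empty).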

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-861171 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 37fa4d23cdd0fd4520bc18cc4baf09518b21dab32ed4c0773492b94c7b3092c4
repoDigests:
- localhost/minikube-local-cache-test@sha256:c438845dc3485fc01d9120817b748c94d6e5d77bef41fe1b6217a52166fd78c4
repoTags:
- localhost/minikube-local-cache-test:functional-861171
size: "3328"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-861171
size: "4788229"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077248"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-861171 image ls --format yaml --alsologtostderr:
I1216 04:21:05.819108  468879 out.go:360] Setting OutFile to fd 1 ...
I1216 04:21:05.819580  468879 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:21:05.819632  468879 out.go:374] Setting ErrFile to fd 2...
I1216 04:21:05.819651  468879 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:21:05.819928  468879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:21:05.820563  468879 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:21:05.820722  468879 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:21:05.822843  468879 cli_runner.go:164] Run: docker container inspect functional-861171 --format={{.State.Status}}
I1216 04:21:05.861244  468879 ssh_runner.go:195] Run: systemctl --version
I1216 04:21:05.861298  468879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-861171
I1216 04:21:05.894953  468879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-861171/id_rsa Username:docker}
I1216 04:21:06.000040  468879 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh pgrep buildkitd
E1216 04:21:06.075862  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-861171 ssh pgrep buildkitd: exit status 1 (399.870667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image build -t localhost/my-image:functional-861171 testdata/build --alsologtostderr
2025/12/16 04:21:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-861171 image build -t localhost/my-image:functional-861171 testdata/build --alsologtostderr: (4.054750223s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-861171 image build -t localhost/my-image:functional-861171 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d4bd55d6b24
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-861171
--> 4db117a5d7f
Successfully tagged localhost/my-image:functional-861171
4db117a5d7f5fb507cadaca94f9a191697ff815e9b2391883141ea6059c4d158
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-861171 image build -t localhost/my-image:functional-861171 testdata/build --alsologtostderr:
I1216 04:21:06.560430  468999 out.go:360] Setting OutFile to fd 1 ...
I1216 04:21:06.561286  468999 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:21:06.561327  468999 out.go:374] Setting ErrFile to fd 2...
I1216 04:21:06.561347  468999 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:21:06.561675  468999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:21:06.562413  468999 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:21:06.563087  468999 config.go:182] Loaded profile config "functional-861171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:21:06.563661  468999 cli_runner.go:164] Run: docker container inspect functional-861171 --format={{.State.Status}}
I1216 04:21:06.601384  468999 ssh_runner.go:195] Run: systemctl --version
I1216 04:21:06.601455  468999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-861171
I1216 04:21:06.622643  468999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-861171/id_rsa Username:docker}
I1216 04:21:06.764256  468999 build_images.go:162] Building image from path: /tmp/build.1095937476.tar
I1216 04:21:06.764375  468999 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 04:21:06.779785  468999 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1095937476.tar
I1216 04:21:06.784397  468999 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1095937476.tar: stat -c "%s %y" /var/lib/minikube/build/build.1095937476.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1095937476.tar': No such file or directory
I1216 04:21:06.784475  468999 ssh_runner.go:362] scp /tmp/build.1095937476.tar --> /var/lib/minikube/build/build.1095937476.tar (3072 bytes)
I1216 04:21:06.849159  468999 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1095937476
I1216 04:21:06.872659  468999 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1095937476 -xf /var/lib/minikube/build/build.1095937476.tar
I1216 04:21:06.890095  468999 crio.go:315] Building image: /var/lib/minikube/build/build.1095937476
I1216 04:21:06.890218  468999 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-861171 /var/lib/minikube/build/build.1095937476 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1216 04:21:10.478224  468999 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-861171 /var/lib/minikube/build/build.1095937476 --cgroup-manager=cgroupfs: (3.587956058s)
I1216 04:21:10.478292  468999 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1095937476
I1216 04:21:10.486251  468999 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1095937476.tar
I1216 04:21:10.494105  468999 build_images.go:218] Built localhost/my-image:functional-861171 from /tmp/build.1095937476.tar
I1216 04:21:10.494144  468999 build_images.go:134] succeeded building to: functional-861171
I1216 04:21:10.494150  468999 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.69s)
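
Note: with the crio runtime, "image build" is delegated to podman inside the node (see the "sudo podman build ... --cgroup-manager=cgroupfs" runner line above). Judging from the STEP lines in the stdout, the testdata/build context contains a content.txt file and roughly this Dockerfile, reconstructed here for reference and not verified against the repo:

  $ cat testdata/build/Dockerfile
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /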

TestFunctional/parallel/ImageCommands/Setup (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-861171
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image load --daemon kicbase/echo-server:functional-861171 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-861171 image load --daemon kicbase/echo-server:functional-861171 --alsologtostderr: (1.843396546s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image load --daemon kicbase/echo-server:functional-861171 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-861171
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image load --daemon kicbase/echo-server:functional-861171 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image save kicbase/echo-server:functional-861171 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image rm kicbase/echo-server:functional-861171 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-861171
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 image save --daemon kicbase/echo-server:functional-861171 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-861171
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
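
Note: the Save/Remove/Load subtests above compose into a round trip that can be replayed by hand; the tar path here is an arbitrary stand-in for the workspace path the harness uses:

  $ out/minikube-linux-arm64 -p functional-861171 image save kicbase/echo-server:functional-861171 /tmp/echo-server-save.tar
  $ out/minikube-linux-arm64 -p functional-861171 image rm kicbase/echo-server:functional-861171
  $ out/minikube-linux-arm64 -p functional-861171 image load /tmp/echo-server-save.tar
  $ out/minikube-linux-arm64 -p functional-861171 image ls | grep echo-server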

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-861171 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.214.36 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
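
Note: "AccessDirect" means the LoadBalancer ingress IP assigned via the tunnel is reachable straight from the host. A manual verification while the tunnel daemon is running, with curl assumed to be available on the host:

  $ kubectl --context functional-861171 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  10.103.214.36
  $ curl -sI http://10.103.214.36   # expect an HTTP response from nginx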

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-861171 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (8.31s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdany-port2077647623/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765858834854501586" to /tmp/TestFunctionalparallelMountCmdany-port2077647623/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765858834854501586" to /tmp/TestFunctionalparallelMountCmdany-port2077647623/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765858834854501586" to /tmp/TestFunctionalparallelMountCmdany-port2077647623/001/test-1765858834854501586
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-861171 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (478.496215ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:20:35.333286  441727 retry.go:31] will retry after 662.414513ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 04:20 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 04:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 04:20 test-1765858834854501586
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh cat /mount-9p/test-1765858834854501586
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-861171 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [c19e6d1d-e49c-4130-8d1a-514d21f30a13] Pending
helpers_test.go:353: "busybox-mount" [c19e6d1d-e49c-4130-8d1a-514d21f30a13] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [c19e6d1d-e49c-4130-8d1a-514d21f30a13] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [c19e6d1d-e49c-4130-8d1a-514d21f30a13] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00371461s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-861171 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdany-port2077647623/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.31s)
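
Note: the sequence above (mount daemon, findmnt probe with one retry, directory listing, busybox-mount pod, forced umount) can be replayed manually; /tmp/src stands in for the per-test temp directory the harness creates:

  $ out/minikube-linux-arm64 mount -p functional-861171 /tmp/src:/mount-9p --alsologtostderr -v=1 &
  $ out/minikube-linux-arm64 -p functional-861171 ssh "findmnt -T /mount-9p | grep 9p"
  $ out/minikube-linux-arm64 -p functional-861171 ssh -- ls -la /mount-9p
  $ out/minikube-linux-arm64 -p functional-861171 ssh "sudo umount -f /mount-9p"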

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdspecific-port2951125742/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-861171 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.838185ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:20:43.521289  441727 retry.go:31] will retry after 272.519129ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdspecific-port2951125742/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-861171 ssh "sudo umount -f /mount-9p": exit status 1 (284.44995ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-861171 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdspecific-port2951125742/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup258763434/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup258763434/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup258763434/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-861171 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup258763434/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup258763434/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-861171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup258763434/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)
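
Note: VerifyCleanup starts three concurrent mounts (/mount1, /mount2, /mount3, all backed by the same host directory) and relies on the --kill flag to reap every mount helper for the profile in one shot; the "unable to find parent, assuming dead" lines confirm the daemons were already gone by the time the harness tried to stop them. The cleanup one-liner:

  $ out/minikube-linux-arm64 mount -p functional-861171 --kill=true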

TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-861171 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-861171 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-hv7h2" [b72180d3-2e4e-4dc4-a610-47eae2a77519] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-hv7h2" [b72180d3-2e4e-4dc4-a610-47eae2a77519] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.027271839s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "368.709689ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "61.216569ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "349.215857ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.755582ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/ServiceCmd/List (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-861171 service list: (1.448400957s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-861171 service list -o json: (1.377571073s)
functional_test.go:1504: Took "1.377652642s" to run "out/minikube-linux-arm64 -p functional-861171 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.38s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32533
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ServiceCmd/Format (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.48s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-861171 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32533
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)
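
The HTTPS, Format, and URL subtests above are three views of the same service lookup. A minimal sketch, assuming the hello-node service from DeployApp still exists (port 32533 is just this run's NodePort):

  # HTTPS URL for the service
  out/minikube-linux-arm64 -p functional-861171 service --namespace=default --https --url hello-node
  # only the node IP, extracted via a Go template
  out/minikube-linux-arm64 -p functional-861171 service hello-node --url --format={{.IP}}
  # plain HTTP URL
  out/minikube-linux-arm64 -p functional-861171 service hello-node --url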

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-861171
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-861171
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-861171
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22158-438353/.minikube/files/etc/test/nested/copy/441727/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-763073 cache add registry.k8s.io/pause:3.1: (1.157117431s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-763073 cache add registry.k8s.io/pause:3.3: (1.164160318s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-763073 cache add registry.k8s.io/pause:latest: (1.091426276s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach4261790202/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 cache add minikube-local-cache-test:functional-763073
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 cache delete minikube-local-cache-test:functional-763073
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-763073
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.12s)
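
add_local builds a throwaway image on the host and pushes it into the profile's cache. A minimal sketch of the same sequence; the build-context path here is hypothetical (the test uses a generated temp dir):

  # build a local image, cache it into the cluster, then clean up both sides
  docker build -t minikube-local-cache-test:functional-763073 /tmp/build-context
  out/minikube-linux-arm64 -p functional-763073 cache add minikube-local-cache-test:functional-763073
  out/minikube-linux-arm64 -p functional-763073 cache delete minikube-local-cache-test:functional-763073
  docker rmi minikube-local-cache-test:functional-763073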

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.682565ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.80s)
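
cache_reload is the standard recovery path for a cached image that was removed inside the node. A minimal sketch of the sequence above, assuming pause:latest was previously added with cache add:

  # remove the image in the node, confirm it is gone, then restore it from the host-side cache
  out/minikube-linux-arm64 -p functional-763073 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: no such image
  out/minikube-linux-arm64 -p functional-763073 cache reload
  out/minikube-linux-arm64 -p functional-763073 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again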

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.93s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1931304802/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-763073 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1931304802/001/logs.txt: (1.020823546s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.02s)
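
Both log commands emit the same content; --file redirects it to disk instead of printing. A minimal sketch (the output path is arbitrary):

  # print cluster logs to stdout, or write them to a file
  out/minikube-linux-arm64 -p functional-763073 logs
  out/minikube-linux-arm64 -p functional-763073 logs --file /tmp/logs.txt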

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 config get cpus: exit status 14 (75.678804ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 config get cpus: exit status 14 (63.629584ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)
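
The round-trip above also documents the exit-code contract: config get returns exit status 14 when the key is unset. A minimal sketch:

  out/minikube-linux-arm64 -p functional-763073 config unset cpus
  out/minikube-linux-arm64 -p functional-763073 config get cpus   # exit status 14: key not found
  out/minikube-linux-arm64 -p functional-763073 config set cpus 2
  out/minikube-linux-arm64 -p functional-763073 config get cpus   # prints 2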

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-763073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-763073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (185.360943ms)

-- stdout --
	* [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1216 04:50:19.090209  498787 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:50:19.090419  498787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:50:19.090433  498787 out.go:374] Setting ErrFile to fd 2...
	I1216 04:50:19.090439  498787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:50:19.090717  498787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:50:19.091128  498787 out.go:368] Setting JSON to false
	I1216 04:50:19.091995  498787 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12765,"bootTime":1765847854,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:50:19.092067  498787 start.go:143] virtualization:  
	I1216 04:50:19.095423  498787 out.go:179] * [functional-763073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1216 04:50:19.099226  498787 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:50:19.099371  498787 notify.go:221] Checking for updates...
	I1216 04:50:19.104957  498787 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:50:19.107850  498787 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:50:19.110753  498787 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:50:19.113596  498787 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:50:19.116471  498787 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:50:19.119722  498787 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:50:19.120403  498787 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:50:19.146589  498787 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:50:19.146757  498787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:50:19.207344  498787 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:50:19.198349203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:50:19.207457  498787 docker.go:319] overlay module found
	I1216 04:50:19.210522  498787 out.go:179] * Using the docker driver based on existing profile
	I1216 04:50:19.213288  498787 start.go:309] selected driver: docker
	I1216 04:50:19.213366  498787 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:50:19.213467  498787 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:50:19.217038  498787 out.go:203] 
	W1216 04:50:19.220063  498787 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 04:50:19.222898  498787 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-763073 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)
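
--dry-run validates the flags without touching the cluster, and memory requests below the usable minimum of 1800MB are rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23). A minimal sketch of both outcomes, taken from the invocations above:

  # rejected: 250MB is below the minimum
  out/minikube-linux-arm64 start -p functional-763073 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
  # accepted: no memory override, so validation passes without starting anything
  out/minikube-linux-arm64 start -p functional-763073 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0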

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-763073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-763073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (195.660427ms)

-- stdout --
	* [functional-763073] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1216 04:50:18.901893  498740 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:50:18.902038  498740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:50:18.902050  498740 out.go:374] Setting ErrFile to fd 2...
	I1216 04:50:18.902068  498740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:50:18.902469  498740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:50:18.902872  498740 out.go:368] Setting JSON to false
	I1216 04:50:18.903773  498740 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12765,"bootTime":1765847854,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1216 04:50:18.903848  498740 start.go:143] virtualization:  
	I1216 04:50:18.907349  498740 out.go:179] * [functional-763073] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1216 04:50:18.910320  498740 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 04:50:18.910382  498740 notify.go:221] Checking for updates...
	I1216 04:50:18.915985  498740 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:50:18.918708  498740 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	I1216 04:50:18.921559  498740 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	I1216 04:50:18.924475  498740 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 04:50:18.927296  498740 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:50:18.930735  498740 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:50:18.931384  498740 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:50:18.958508  498740 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1216 04:50:18.958646  498740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:50:19.021790  498740 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-16 04:50:19.012145995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:50:19.021898  498740 docker.go:319] overlay module found
	I1216 04:50:19.024862  498740 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 04:50:19.027562  498740 start.go:309] selected driver: docker
	I1216 04:50:19.027609  498740 start.go:927] validating driver "docker" against &{Name:functional-763073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-763073 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:50:19.027732  498740 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:50:19.031332  498740 out.go:203] 
	W1216 04:50:19.034101  498740 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 04:50:19.036872  498740 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh -n functional-763073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 cp functional-763073:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1427695800/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh -n functional-763073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh -n functional-763073 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.22s)
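
minikube cp copies in both directions and creates missing target directories on the node. A minimal sketch of the three transfers exercised above:

  # host -> node, node -> host, and host -> a node path that does not yet exist
  out/minikube-linux-arm64 -p functional-763073 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-arm64 -p functional-763073 cp functional-763073:/home/docker/cp-test.txt /tmp/cp-test.txt
  out/minikube-linux-arm64 -p functional-763073 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt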

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/441727/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo cat /etc/test/nested/copy/441727/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/441727.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo cat /etc/ssl/certs/441727.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/441727.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo cat /usr/share/ca-certificates/441727.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4417272.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo cat /etc/ssl/certs/4417272.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4417272.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo cat /usr/share/ca-certificates/4417272.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.73s)
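
Each synced certificate is checked both under its file name and under its OpenSSL subject-hash link. A minimal sketch of one such pair (the 441727 id and 51391683.0 hash are specific to this run):

  # same certificate, reachable by name and by subject-hash symlink
  out/minikube-linux-arm64 -p functional-763073 ssh "sudo cat /etc/ssl/certs/441727.pem"
  out/minikube-linux-arm64 -p functional-763073 ssh "sudo cat /etc/ssl/certs/51391683.0"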

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 ssh "sudo systemctl is-active docker": exit status 1 (297.986755ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 ssh "sudo systemctl is-active containerd": exit status 1 (271.108479ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
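
With crio as the active runtime, the other runtimes must be inactive; systemctl is-active reports that through its exit status, which is why the test accepts the non-zero exits above (status 3 means the unit is inactive). A minimal sketch:

  # both should print "inactive" and exit non-zero on a crio cluster
  out/minikube-linux-arm64 -p functional-763073 ssh "sudo systemctl is-active docker"
  out/minikube-linux-arm64 -p functional-763073 ssh "sudo systemctl is-active containerd"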

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-763073 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-763073 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "329.488674ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "61.480273ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "316.149389ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.921252ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3173719408/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (336.769584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:50:12.269606  441727 retry.go:31] will retry after 487.387196ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3173719408/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 ssh "sudo umount -f /mount-9p": exit status 1 (272.115339ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-763073 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3173719408/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.84s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T" /mount1: exit status 1 (604.693637ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 04:50:14.377565  441727 retry.go:31] will retry after 445.236204ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-763073 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-763073 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1495930418/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.96s)
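
VerifyCleanup starts three 9p mounts and tears them all down with one --kill. A minimal sketch of the cycle, assuming a hypothetical /tmp/shared host directory and each mount run from its own shell (the test daemonizes them instead):

  # start a mount, verify it is visible in the node, then kill every mount process for the profile
  out/minikube-linux-arm64 mount -p functional-763073 /tmp/shared:/mount1 --alsologtostderr -v=1
  out/minikube-linux-arm64 -p functional-763073 ssh "findmnt -T" /mount1
  out/minikube-linux-arm64 mount -p functional-763073 --kill=true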

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-763073 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-763073
localhost/kicbase/echo-server:functional-763073
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-763073 image ls --format short --alsologtostderr:
I1216 04:50:31.649952  500939 out.go:360] Setting OutFile to fd 1 ...
I1216 04:50:31.650086  500939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:31.650095  500939 out.go:374] Setting ErrFile to fd 2...
I1216 04:50:31.650100  500939 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:31.650358  500939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:50:31.651227  500939 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:31.651361  500939 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:31.651886  500939 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
I1216 04:50:31.671351  500939 ssh_runner.go:195] Run: systemctl --version
I1216 04:50:31.671410  500939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
I1216 04:50:31.689094  500939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
I1216 04:50:31.787688  500939 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-763073 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/kicbase/echo-server           │ functional-763073  │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ localhost/minikube-local-cache-test     │ functional-763073  │ 37fa4d23cdd0f │ 3.33kB │
│ localhost/my-image                      │ functional-763073  │ a978cbe0ca3ad │ 1.64MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-763073 image ls --format table --alsologtostderr:
I1216 04:50:36.018371  501437 out.go:360] Setting OutFile to fd 1 ...
I1216 04:50:36.018501  501437 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:36.018511  501437 out.go:374] Setting ErrFile to fd 2...
I1216 04:50:36.018516  501437 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:36.018795  501437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:50:36.019502  501437 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:36.019636  501437 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:36.020162  501437 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
I1216 04:50:36.039151  501437 ssh_runner.go:195] Run: systemctl --version
I1216 04:50:36.039208  501437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
I1216 04:50:36.057529  501437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
I1216 04:50:36.151752  501437 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-763073 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-763073"],"size":"4788229"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"37fa4d23cdd0fd4520bc18cc4baf09518b21dab32ed4c0773492b94c7b3092c4","repoDigests":["localhost/minikube-local-cache-test@sha256:c438845dc3485fc01d9120817b748c94d6e5d77bef41fe1b6217a52166fd78c4"],"repoTags":["localhost/minikube-local-cache-test:functional-763073"],"size":"3328"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"7f6bd96d83f5ca2852d26f0cb5bc7afd0861567f9df7ba27a3d1bba006d5c7e2","repoDigests":["docker.io/library/adc660eded595342b018c250d93cb9823b82853852ad0de5e5625bd6349e1162-tmp@sha256:8a868946ba93e27f8cf095e5f824fd650dfb819e27b3857b68fac671349d6e73"],"repoTags":[],"size":"1638179"},{"id":"a978cbe0ca3ad792a1b2a69095560d78fd3eb09ec4150f9ec61330f2fd6849d2","repoDigests":["localhost/my-image@sha256:5d8ec43c74f4cab49cf73599764853dad72684df918481e23808771e453143e5"],"repoTags":["localhost/my-image:functional-763073"],"size":"1640791"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-763073 image ls --format json --alsologtostderr:
I1216 04:50:35.783005  501393 out.go:360] Setting OutFile to fd 1 ...
I1216 04:50:35.783182  501393 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:35.783191  501393 out.go:374] Setting ErrFile to fd 2...
I1216 04:50:35.783196  501393 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:35.783456  501393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:50:35.784049  501393 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:35.784188  501393 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:35.784686  501393 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
I1216 04:50:35.808258  501393 ssh_runner.go:195] Run: systemctl --version
I1216 04:50:35.808318  501393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
I1216 04:50:35.834224  501393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
I1216 04:50:35.932122  501393 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)
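Note: the stdout above is one JSON array; each image object carries id, repoDigests, repoTags, and size (bytes, serialized as a string). A minimal Go sketch that decodes output of this shape; the struct and field set here are inferred from the log, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the `image ls --format json` stdout above;
// the field names are inferred from the log, not minikube's own type.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	// Usage (hypothetical): out/minikube-linux-arm64 -p functional-763073 image ls --format json | go run decode.go
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s\t%s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}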

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-763073 image ls --format yaml --alsologtostderr:
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-763073
size: "4788229"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 37fa4d23cdd0fd4520bc18cc4baf09518b21dab32ed4c0773492b94c7b3092c4
repoDigests:
- localhost/minikube-local-cache-test@sha256:c438845dc3485fc01d9120817b748c94d6e5d77bef41fe1b6217a52166fd78c4
repoTags:
- localhost/minikube-local-cache-test:functional-763073
size: "3328"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-763073 image ls --format yaml --alsologtostderr:
I1216 04:50:31.881623  500976 out.go:360] Setting OutFile to fd 1 ...
I1216 04:50:31.881773  500976 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:31.881825  500976 out.go:374] Setting ErrFile to fd 2...
I1216 04:50:31.881839  500976 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:31.882114  500976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:50:31.882855  500976 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:31.883012  500976 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:31.883578  500976 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
I1216 04:50:31.900811  500976 ssh_runner.go:195] Run: systemctl --version
I1216 04:50:31.900872  500976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
I1216 04:50:31.918917  500976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
I1216 04:50:32.020307  500976 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-763073 ssh pgrep buildkitd: exit status 1 (271.649252ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image build -t localhost/my-image:functional-763073 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-763073 image build -t localhost/my-image:functional-763073 testdata/build --alsologtostderr: (3.158414387s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-763073 image build -t localhost/my-image:functional-763073 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7f6bd96d83f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-763073
--> a978cbe0ca3
Successfully tagged localhost/my-image:functional-763073
a978cbe0ca3ad792a1b2a69095560d78fd3eb09ec4150f9ec61330f2fd6849d2
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-763073 image build -t localhost/my-image:functional-763073 testdata/build --alsologtostderr:
I1216 04:50:32.385567  501082 out.go:360] Setting OutFile to fd 1 ...
I1216 04:50:32.385714  501082 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:32.385745  501082 out.go:374] Setting ErrFile to fd 2...
I1216 04:50:32.385757  501082 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:50:32.386147  501082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
I1216 04:50:32.387425  501082 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:32.388147  501082 config.go:182] Loaded profile config "functional-763073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:50:32.388762  501082 cli_runner.go:164] Run: docker container inspect functional-763073 --format={{.State.Status}}
I1216 04:50:32.408621  501082 ssh_runner.go:195] Run: systemctl --version
I1216 04:50:32.408675  501082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-763073
I1216 04:50:32.426609  501082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/functional-763073/id_rsa Username:docker}
I1216 04:50:32.527845  501082 build_images.go:162] Building image from path: /tmp/build.2921597125.tar
I1216 04:50:32.527948  501082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 04:50:32.536245  501082 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2921597125.tar
I1216 04:50:32.540038  501082 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2921597125.tar: stat -c "%s %y" /var/lib/minikube/build/build.2921597125.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2921597125.tar': No such file or directory
I1216 04:50:32.540072  501082 ssh_runner.go:362] scp /tmp/build.2921597125.tar --> /var/lib/minikube/build/build.2921597125.tar (3072 bytes)
I1216 04:50:32.558295  501082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2921597125
I1216 04:50:32.566152  501082 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2921597125 -xf /var/lib/minikube/build/build.2921597125.tar
I1216 04:50:32.574512  501082 crio.go:315] Building image: /var/lib/minikube/build/build.2921597125
I1216 04:50:32.574640  501082 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-763073 /var/lib/minikube/build/build.2921597125 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1216 04:50:35.463834  501082 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-763073 /var/lib/minikube/build/build.2921597125 --cgroup-manager=cgroupfs: (2.889145762s)
I1216 04:50:35.463902  501082 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2921597125
I1216 04:50:35.471529  501082 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2921597125.tar
I1216 04:50:35.479809  501082 build_images.go:218] Built localhost/my-image:functional-763073 from /tmp/build.2921597125.tar
I1216 04:50:35.479840  501082 build_images.go:134] succeeded building to: functional-763073
I1216 04:50:35.479844  501082 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.66s)
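The stderr above spells out how `image build` works on a crio node: the context is tarred on the host, copied to /var/lib/minikube/build, unpacked, and handed to podman (crio.go:315), then the staging files are removed. A rough standalone sketch of the same sequence, assuming podman is on PATH and build.tar is a build-context tarball; the paths and the image tag are illustrative, not minikube's:

package main

import (
	"log"
	"os"
	"os/exec"
)

// run mirrors one ssh_runner step from the log: execute, stream output, abort on error.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	dir := "/tmp/build.demo" // stand-in for /var/lib/minikube/build/build.<N>
	run("mkdir", "-p", dir)
	run("tar", "-C", dir, "-xf", "build.tar")
	run("podman", "build", "-t", "localhost/my-image:demo", dir)
	run("rm", "-rf", dir) // cleanup, as in the final ssh_runner steps above
}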

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-763073
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image load --daemon kicbase/echo-server:functional-763073 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image load --daemon kicbase/echo-server:functional-763073 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image ls
E1216 04:50:24.308813  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.81s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-763073
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image load --daemon kicbase/echo-server:functional-763073 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image save kicbase/echo-server:functional-763073 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image rm kicbase/echo-server:functional-763073 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.80s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-763073
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 image save --daemon kicbase/echo-server:functional-763073 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-763073
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-763073 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-763073
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-763073
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-763073
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (199.26s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1216 04:53:22.217531  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:22.647368  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:22.653817  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:22.665160  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:22.686558  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:22.727963  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:22.809363  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:22.970866  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:23.292442  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:23.934266  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:25.215609  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:27.778367  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:32.900221  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:43.141540  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:54:03.623344  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:54:44.584941  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:55:24.308005  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m18.321270073s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (199.26s)

TestMultiControlPlane/serial/DeployApp (7.49s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 kubectl -- rollout status deployment/busybox: (4.677899725s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-bdx7c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-fg8r8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-vh2zr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-bdx7c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-fg8r8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-vh2zr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-bdx7c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-fg8r8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-vh2zr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.49s)

TestMultiControlPlane/serial/PingHostFromPods (1.58s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-bdx7c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-bdx7c -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-fg8r8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-fg8r8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-vh2zr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 kubectl -- exec busybox-7b57f96db7-vh2zr -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.58s)
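The pipeline used above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the fifth line of busybox-style nslookup output and its third space-separated field, which is the resolved host IP that the following ping targets. A small Go sketch of the same extraction; the sample output is illustrative and real pod output may differ:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: fifth line, third space-separated field.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // like cut, split on every single space
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative busybox-style nslookup output.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"
	fmt.Println(hostIP(out)) // 192.168.49.1
}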

TestMultiControlPlane/serial/AddWorkerNode (60.01s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 node add --alsologtostderr -v 5
E1216 04:56:06.506266  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 node add --alsologtostderr -v 5: (58.912884199s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5: (1.093371444s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.01s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-014666 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.026230826s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (20.19s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 status --output json --alsologtostderr -v 5: (1.035916266s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp testdata/cp-test.txt ha-014666:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile691308913/001/cp-test_ha-014666.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666:/home/docker/cp-test.txt ha-014666-m02:/home/docker/cp-test_ha-014666_ha-014666-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m02 "sudo cat /home/docker/cp-test_ha-014666_ha-014666-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666:/home/docker/cp-test.txt ha-014666-m03:/home/docker/cp-test_ha-014666_ha-014666-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m03 "sudo cat /home/docker/cp-test_ha-014666_ha-014666-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666:/home/docker/cp-test.txt ha-014666-m04:/home/docker/cp-test_ha-014666_ha-014666-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m04 "sudo cat /home/docker/cp-test_ha-014666_ha-014666-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp testdata/cp-test.txt ha-014666-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile691308913/001/cp-test_ha-014666-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m02:/home/docker/cp-test.txt ha-014666:/home/docker/cp-test_ha-014666-m02_ha-014666.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666 "sudo cat /home/docker/cp-test_ha-014666-m02_ha-014666.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m02:/home/docker/cp-test.txt ha-014666-m03:/home/docker/cp-test_ha-014666-m02_ha-014666-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m03 "sudo cat /home/docker/cp-test_ha-014666-m02_ha-014666-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m02:/home/docker/cp-test.txt ha-014666-m04:/home/docker/cp-test_ha-014666-m02_ha-014666-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m04 "sudo cat /home/docker/cp-test_ha-014666-m02_ha-014666-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp testdata/cp-test.txt ha-014666-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile691308913/001/cp-test_ha-014666-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m03:/home/docker/cp-test.txt ha-014666:/home/docker/cp-test_ha-014666-m03_ha-014666.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666 "sudo cat /home/docker/cp-test_ha-014666-m03_ha-014666.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m03:/home/docker/cp-test.txt ha-014666-m02:/home/docker/cp-test_ha-014666-m03_ha-014666-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m02 "sudo cat /home/docker/cp-test_ha-014666-m03_ha-014666-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m03:/home/docker/cp-test.txt ha-014666-m04:/home/docker/cp-test_ha-014666-m03_ha-014666-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m04 "sudo cat /home/docker/cp-test_ha-014666-m03_ha-014666-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp testdata/cp-test.txt ha-014666-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile691308913/001/cp-test_ha-014666-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m04:/home/docker/cp-test.txt ha-014666:/home/docker/cp-test_ha-014666-m04_ha-014666.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666 "sudo cat /home/docker/cp-test_ha-014666-m04_ha-014666.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m04:/home/docker/cp-test.txt ha-014666-m02:/home/docker/cp-test_ha-014666-m04_ha-014666-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m02 "sudo cat /home/docker/cp-test_ha-014666-m04_ha-014666-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 cp ha-014666-m04:/home/docker/cp-test.txt ha-014666-m03:/home/docker/cp-test_ha-014666-m04_ha-014666-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 ssh -n ha-014666-m03 "sudo cat /home/docker/cp-test_ha-014666-m04_ha-014666-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.19s)
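
Note: the CopyFile sequence above exercises one pattern throughout: "minikube cp" pushes a file to a node, then "minikube ssh -n <node>" reads it back to confirm the copy. A minimal sketch of the same round trip, assuming the running ha-014666 profile (file paths illustrative):

  # copy from host to a node, then verify the contents over ssh
  minikube -p ha-014666 cp testdata/cp-test.txt ha-014666-m02:/home/docker/cp-test.txt
  minikube -p ha-014666 ssh -n ha-014666-m02 "sudo cat /home/docker/cp-test.txt"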

TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 node stop m02 --alsologtostderr -v 5: (12.046854298s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5: exit status 7 (817.533415ms)

-- stdout --
	ha-014666
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-014666-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-014666-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-014666-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1216 04:57:28.609186  517273 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:57:28.609354  517273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:57:28.609366  517273 out.go:374] Setting ErrFile to fd 2...
	I1216 04:57:28.609372  517273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:57:28.609673  517273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 04:57:28.610002  517273 out.go:368] Setting JSON to false
	I1216 04:57:28.610032  517273 mustload.go:66] Loading cluster: ha-014666
	I1216 04:57:28.610190  517273 notify.go:221] Checking for updates...
	I1216 04:57:28.610507  517273 config.go:182] Loaded profile config "ha-014666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:57:28.610527  517273 status.go:174] checking status of ha-014666 ...
	I1216 04:57:28.611472  517273 cli_runner.go:164] Run: docker container inspect ha-014666 --format={{.State.Status}}
	I1216 04:57:28.636001  517273 status.go:371] ha-014666 host status = "Running" (err=<nil>)
	I1216 04:57:28.636021  517273 host.go:66] Checking if "ha-014666" exists ...
	I1216 04:57:28.636319  517273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-014666
	I1216 04:57:28.667254  517273 host.go:66] Checking if "ha-014666" exists ...
	I1216 04:57:28.667583  517273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:57:28.667632  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-014666
	I1216 04:57:28.691958  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/ha-014666/id_rsa Username:docker}
	I1216 04:57:28.790892  517273 ssh_runner.go:195] Run: systemctl --version
	I1216 04:57:28.797884  517273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:57:28.811812  517273 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:57:28.898891  517273 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-16 04:57:28.888419593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 04:57:28.899469  517273 kubeconfig.go:125] found "ha-014666" server: "https://192.168.49.254:8443"
	I1216 04:57:28.899493  517273 api_server.go:166] Checking apiserver status ...
	I1216 04:57:28.899560  517273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:57:28.912143  517273 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	I1216 04:57:28.922276  517273 api_server.go:182] apiserver freezer: "8:freezer:/docker/9c4dd1f13f8d15d44276211bd2ced0073e1011778017ce301120ca28b8b0643b/crio/crio-f999ed1c7b64e470f27414b8b53925315d1530b8f94727c1373867b52c39b6bf"
	I1216 04:57:28.922344  517273 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9c4dd1f13f8d15d44276211bd2ced0073e1011778017ce301120ca28b8b0643b/crio/crio-f999ed1c7b64e470f27414b8b53925315d1530b8f94727c1373867b52c39b6bf/freezer.state
	I1216 04:57:28.930626  517273 api_server.go:204] freezer state: "THAWED"
	I1216 04:57:28.930655  517273 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 04:57:28.939229  517273 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 04:57:28.939260  517273 status.go:463] ha-014666 apiserver status = Running (err=<nil>)
	I1216 04:57:28.939271  517273 status.go:176] ha-014666 status: &{Name:ha-014666 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:57:28.939287  517273 status.go:174] checking status of ha-014666-m02 ...
	I1216 04:57:28.939600  517273 cli_runner.go:164] Run: docker container inspect ha-014666-m02 --format={{.State.Status}}
	I1216 04:57:28.962327  517273 status.go:371] ha-014666-m02 host status = "Stopped" (err=<nil>)
	I1216 04:57:28.962349  517273 status.go:384] host is not running, skipping remaining checks
	I1216 04:57:28.962356  517273 status.go:176] ha-014666-m02 status: &{Name:ha-014666-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:57:28.962375  517273 status.go:174] checking status of ha-014666-m03 ...
	I1216 04:57:28.962693  517273 cli_runner.go:164] Run: docker container inspect ha-014666-m03 --format={{.State.Status}}
	I1216 04:57:28.982274  517273 status.go:371] ha-014666-m03 host status = "Running" (err=<nil>)
	I1216 04:57:28.982299  517273 host.go:66] Checking if "ha-014666-m03" exists ...
	I1216 04:57:28.982622  517273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-014666-m03
	I1216 04:57:29.015775  517273 host.go:66] Checking if "ha-014666-m03" exists ...
	I1216 04:57:29.016110  517273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:57:29.016157  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-014666-m03
	I1216 04:57:29.034108  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/ha-014666-m03/id_rsa Username:docker}
	I1216 04:57:29.131361  517273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:57:29.144675  517273 kubeconfig.go:125] found "ha-014666" server: "https://192.168.49.254:8443"
	I1216 04:57:29.144712  517273 api_server.go:166] Checking apiserver status ...
	I1216 04:57:29.144790  517273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:57:29.157381  517273 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1213/cgroup
	I1216 04:57:29.166162  517273 api_server.go:182] apiserver freezer: "8:freezer:/docker/bbac613628dec7d49ca9885238f861811ee7b1a328a9e3a10d1c5bba9d8fd82b/crio/crio-f8113c906d024d72db2ac1494762d563faa36609ff985ed5db2d8d208a4fc41c"
	I1216 04:57:29.166241  517273 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bbac613628dec7d49ca9885238f861811ee7b1a328a9e3a10d1c5bba9d8fd82b/crio/crio-f8113c906d024d72db2ac1494762d563faa36609ff985ed5db2d8d208a4fc41c/freezer.state
	I1216 04:57:29.175164  517273 api_server.go:204] freezer state: "THAWED"
	I1216 04:57:29.175197  517273 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 04:57:29.183728  517273 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 04:57:29.183757  517273 status.go:463] ha-014666-m03 apiserver status = Running (err=<nil>)
	I1216 04:57:29.183766  517273 status.go:176] ha-014666-m03 status: &{Name:ha-014666-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:57:29.183786  517273 status.go:174] checking status of ha-014666-m04 ...
	I1216 04:57:29.184104  517273 cli_runner.go:164] Run: docker container inspect ha-014666-m04 --format={{.State.Status}}
	I1216 04:57:29.202917  517273 status.go:371] ha-014666-m04 host status = "Running" (err=<nil>)
	I1216 04:57:29.202944  517273 host.go:66] Checking if "ha-014666-m04" exists ...
	I1216 04:57:29.203260  517273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-014666-m04
	I1216 04:57:29.221869  517273 host.go:66] Checking if "ha-014666-m04" exists ...
	I1216 04:57:29.222190  517273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:57:29.222237  517273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-014666-m04
	I1216 04:57:29.239507  517273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/ha-014666-m04/id_rsa Username:docker}
	I1216 04:57:29.334215  517273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:57:29.350758  517273 status.go:176] ha-014666-m04 status: &{Name:ha-014666-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
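
Note: "minikube status" exits non-zero when any node in the profile is down; the run above returned exit status 7 with m02 stopped, which the test treats as the expected degraded-but-running state. A sketch for reproducing the check by hand (profile name illustrative):

  minikube -p ha-014666 node stop m02
  minikube -p ha-014666 status; echo "status exit code: $?"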

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)
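
Note: this subtest only runs "minikube profile list --output json", presumably asserting that the profile reports a degraded state while m02 is stopped. A sketch for eyeballing the same output with jq; the .valid[].Name/.Status field layout is an assumption here, not something visible in this log:

  minikube profile list --output json | jq '.valid[] | {name: .Name, status: .Status}'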

TestMultiControlPlane/serial/RestartSecondaryNode (20.35s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 node start m02 --alsologtostderr -v 5: (19.028700905s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5: (1.198333721s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.35s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.133328221s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (209.02s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 stop --alsologtostderr -v 5: (26.682409205s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 start --wait true --alsologtostderr -v 5
E1216 04:58:22.214059  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:58:22.646516  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:58:27.377738  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:58:50.353244  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:00:24.308643  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 start --wait true --alsologtostderr -v 5: (3m2.15024443s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (209.02s)
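
Note: RestartClusterKeepsNodes asserts that a full stop/start cycle preserves the node list. A minimal sketch of the same round trip (profile name illustrative):

  minikube -p ha-014666 node list
  minikube -p ha-014666 stop
  minikube -p ha-014666 start --wait true
  minikube -p ha-014666 node list    # expect the same node set as before the stop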

TestMultiControlPlane/serial/DeleteSecondaryNode (30.79s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 node delete m03 --alsologtostderr -v 5: (29.837149503s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (30.79s)
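
Note: the go-template query above prints one Ready condition status per node. An equivalent jsonpath form that may be easier to read (a sketch, not part of the test):

  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'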

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

TestMultiControlPlane/serial/StopCluster (36.06s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 stop --alsologtostderr -v 5: (35.945685629s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5: exit status 7 (116.329493ms)

-- stdout --
	ha-014666
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-014666-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-014666-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1216 05:02:28.253459  529108 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:02:28.253888  529108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:28.253924  529108 out.go:374] Setting ErrFile to fd 2...
	I1216 05:02:28.253946  529108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:28.254279  529108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:02:28.254530  529108 out.go:368] Setting JSON to false
	I1216 05:02:28.254584  529108 mustload.go:66] Loading cluster: ha-014666
	I1216 05:02:28.255042  529108 config.go:182] Loaded profile config "ha-014666": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:02:28.255086  529108 status.go:174] checking status of ha-014666 ...
	I1216 05:02:28.255658  529108 cli_runner.go:164] Run: docker container inspect ha-014666 --format={{.State.Status}}
	I1216 05:02:28.255723  529108 notify.go:221] Checking for updates...
	I1216 05:02:28.273508  529108 status.go:371] ha-014666 host status = "Stopped" (err=<nil>)
	I1216 05:02:28.273528  529108 status.go:384] host is not running, skipping remaining checks
	I1216 05:02:28.273535  529108 status.go:176] ha-014666 status: &{Name:ha-014666 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:02:28.273561  529108 status.go:174] checking status of ha-014666-m02 ...
	I1216 05:02:28.273868  529108 cli_runner.go:164] Run: docker container inspect ha-014666-m02 --format={{.State.Status}}
	I1216 05:02:28.297258  529108 status.go:371] ha-014666-m02 host status = "Stopped" (err=<nil>)
	I1216 05:02:28.297282  529108 status.go:384] host is not running, skipping remaining checks
	I1216 05:02:28.297290  529108 status.go:176] ha-014666-m02 status: &{Name:ha-014666-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:02:28.297310  529108 status.go:174] checking status of ha-014666-m04 ...
	I1216 05:02:28.297611  529108 cli_runner.go:164] Run: docker container inspect ha-014666-m04 --format={{.State.Status}}
	I1216 05:02:28.321338  529108 status.go:371] ha-014666-m04 host status = "Stopped" (err=<nil>)
	I1216 05:02:28.321363  529108 status.go:384] host is not running, skipping remaining checks
	I1216 05:02:28.321370  529108 status.go:176] ha-014666-m04 status: &{Name:ha-014666-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.06s)

TestMultiControlPlane/serial/RestartCluster (83.83s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1216 05:03:22.214699  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:03:22.646694  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m22.862897571s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (83.83s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (82.25s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 node add --control-plane --alsologtostderr -v 5: (1m21.186792121s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-014666 status --alsologtostderr -v 5: (1.063428999s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.25s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.071571571s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

TestJSONOutput/start/Command (80.43s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-212065 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-212065 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.417472369s)
--- PASS: TestJSONOutput/start/Command (80.43s)
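
Note: the Audit, DistinctCurrentSteps and IncreasingCurrentSteps subtests that follow validate this JSON stream, which emits one CloudEvent per line (the field layout is visible in the TestErrorJSONOutput stdout further down). A sketch for watching the step counter by hand:

  minikube start -p json-output-212065 --output=json --driver=docker --container-runtime=crio \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + " " + .data.name'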

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-212065 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-212065 --output=json --user=testUser: (5.843675671s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-541012 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-541012 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (98.017037ms)

-- stdout --
	{"specversion":"1.0","id":"8fa8e0b0-b18a-41a6-b404-d12304607545","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-541012] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"109305ea-8125-4e5b-b5bf-46b43ff793e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22158"}}
	{"specversion":"1.0","id":"38e2ddca-a2d3-4bfd-bfe0-ebc4423d4346","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f8f12ac5-d582-433d-98d9-cd3e9f11dd1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig"}}
	{"specversion":"1.0","id":"11e3c4bd-03a2-47b4-af95-2bfd65254105","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube"}}
	{"specversion":"1.0","id":"e63a04c9-3528-456d-80c9-c96a6756d175","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"161507f6-ee6a-4bbb-9623-42ad43d9513d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cb5cac45-855e-4e93-9772-f9cae60b5239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-541012" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-541012
--- PASS: TestErrorJSONOutput (0.25s)
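
Note: on the error path the stream carries a single io.k8s.sigs.minikube.error event with the exit code and error name (DRV_UNSUPPORTED_OS above). A sketch for pulling error events out of the stream:

  minikube start -p json-output-error-541012 --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'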

TestKicCustomNetwork/create_custom_network (43.08s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-499134 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-499134 --network=: (40.841575744s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-499134" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-499134
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-499134: (2.213629199s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.08s)

TestKicCustomNetwork/use_default_bridge_network (35.72s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-254080 --network=bridge
E1216 05:08:05.288605  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-254080 --network=bridge: (33.571290845s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-254080" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-254080
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-254080: (2.126465468s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.72s)

TestKicExistingNetwork (37.64s)

=== RUN   TestKicExistingNetwork
I1216 05:08:18.631833  441727 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1216 05:08:18.647553  441727 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1216 05:08:18.647637  441727 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1216 05:08:18.647656  441727 cli_runner.go:164] Run: docker network inspect existing-network
W1216 05:08:18.664210  441727 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1216 05:08:18.664241  441727 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1216 05:08:18.664256  441727 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1216 05:08:18.664385  441727 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 05:08:18.683219  441727 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-66a1741c73ed IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:45:79:86:27:66} reservation:<nil>}
I1216 05:08:18.683568  441727 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000366920}
I1216 05:08:18.683596  441727 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1216 05:08:18.683649  441727 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1216 05:08:18.753922  441727 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-952656 --network=existing-network
E1216 05:08:22.213867  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:08:22.646817  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-952656 --network=existing-network: (35.305099348s)
helpers_test.go:176: Cleaning up "existing-network-952656" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-952656
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-952656: (2.175352581s)
I1216 05:08:56.250579  441727 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.64s)
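
Note: the log above shows the recipe this test automates: pre-create a Docker network, then point minikube at it with --network. A sketch of the same steps (names taken from the log):

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
  minikube start -p existing-network-952656 --network=existing-network
  docker network ls --format '{{.Name}}'    # existing-network should be listed, and reused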

TestKicCustomSubnet (37.72s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-428018 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-428018 --subnet=192.168.60.0/24: (35.501400105s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-428018 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-428018" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-428018
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-428018: (2.201944165s)
--- PASS: TestKicCustomSubnet (37.72s)

TestKicStaticIP (36.94s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-672449 --static-ip=192.168.200.200
E1216 05:09:45.715627  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-672449 --static-ip=192.168.200.200: (34.463542479s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-672449 ip
helpers_test.go:176: Cleaning up "static-ip-672449" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-672449
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-672449: (2.312352437s)
--- PASS: TestKicStaticIP (36.94s)
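
Note: the two KIC tests above pin the cluster network: --subnet fixes the Docker network range, --static-ip fixes the node address itself. A sketch for verifying both (profile names taken from the log):

  minikube start -p custom-subnet-428018 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-428018 --format '{{(index .IPAM.Config 0).Subnet}}'
  minikube start -p static-ip-672449 --static-ip=192.168.200.200
  minikube -p static-ip-672449 ip    # should print 192.168.200.200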

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.45s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-474437 --driver=docker  --container-runtime=crio
E1216 05:10:24.307813  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-474437 --driver=docker  --container-runtime=crio: (33.653656005s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-476890 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-476890 --driver=docker  --container-runtime=crio: (34.129553603s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-474437
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-476890
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-476890" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-476890
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-476890: (2.104915853s)
helpers_test.go:176: Cleaning up "first-474437" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-474437
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-474437: (2.044004423s)
--- PASS: TestMinikubeProfile (73.45s)
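
Note: TestMinikubeProfile drives two clusters and switches the active profile between them. A minimal sketch of the flow (profile names taken from the log):

  minikube start -p first-474437 --driver=docker --container-runtime=crio
  minikube start -p second-476890 --driver=docker --container-runtime=crio
  minikube profile first-474437    # make first-474437 the active profile
  minikube profile list -ojson     # inspect both profiles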

TestMountStart/serial/StartWithMountFirst (8.99s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-794677 --memory=3072 --mount-string /tmp/TestMountStartserial525002064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-794677 --memory=3072 --mount-string /tmp/TestMountStartserial525002064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.991665351s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.99s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-794677 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
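
Note: StartWithMountFirst starts a node with a host-directory mount and no Kubernetes; VerifyMountFirst then lists the mount over ssh. A sketch mirroring the flags used above (host path illustrative):

  minikube start -p mount-start-1-794677 --memory=3072 \
    --mount-string /tmp/hostdir:/minikube-host \
    --mount-gid 0 --mount-uid 0 --mount-port 46464 \
    --no-kubernetes --driver=docker --container-runtime=crio
  minikube -p mount-start-1-794677 ssh -- ls /minikube-host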

TestMountStart/serial/StartWithMountSecond (8.67s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-796884 --memory=3072 --mount-string /tmp/TestMountStartserial525002064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-796884 --memory=3072 --mount-string /tmp/TestMountStartserial525002064/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.665814726s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.67s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-796884 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-794677 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-794677 --alsologtostderr -v=5: (1.716069484s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-796884 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-796884
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-796884: (1.295644538s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (7.94s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-796884
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-796884: (6.937561434s)
--- PASS: TestMountStart/serial/RestartStopped (7.94s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-796884 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)
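Note: the TestMountStart sequence above walks a 9p host mount through its full lifecycle: create, verify, survive deletion of a sibling profile, stop, restart, verify again. A minimal sketch of the same workflow; the profile name and host path are placeholders, and the --mount-* values simply mirror the flags the test passes:

    # create a no-Kubernetes profile with a host directory mounted at /minikube-host
    minikube start -p mount-demo --no-kubernetes --driver=docker --container-runtime=crio \
      --mount-string /tmp/demo:/minikube-host --mount-port 46465 --mount-uid 0 --mount-gid 0
    minikube -p mount-demo ssh -- ls /minikube-host   # verify the mount from inside the node
    minikube stop -p mount-demo
    minikube start -p mount-demo                      # the mount is re-established on restart
    minikube -p mount-demo ssh -- ls /minikube-host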

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (137.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-723048 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1216 05:13:22.214076  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:13:22.647214  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-723048 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.58475619s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.12s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-723048 -- rollout status deployment/busybox: (3.191859856s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- exec busybox-7b57f96db7-qmrpv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- exec busybox-7b57f96db7-xfqsn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- exec busybox-7b57f96db7-qmrpv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- exec busybox-7b57f96db7-xfqsn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- exec busybox-7b57f96db7-qmrpv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- exec busybox-7b57f96db7-xfqsn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)
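Note: DeployApp2Nodes applies a two-replica busybox deployment (the manifest lives under testdata/multinodes/ in the minikube repo) so the replicas spread across the nodes, then asserts DNS works from every pod. A sketch of the check, with a placeholder profile name:

    minikube kubectl -p multinode-demo -- apply -f multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-demo -- rollout status deployment/busybox
    # resolve an external name and the in-cluster API service from each replica
    for pod in $(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      minikube kubectl -p multinode-demo -- exec "$pod" -- nslookup kubernetes.io
      minikube kubectl -p multinode-demo -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done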

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- exec busybox-7b57f96db7-qmrpv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- exec busybox-7b57f96db7-qmrpv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- exec busybox-7b57f96db7-xfqsn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-723048 -- exec busybox-7b57f96db7-xfqsn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
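Note: the awk 'NR==5' | cut -d' ' -f3 pipeline above just extracts the resolved address of host.minikube.internal from busybox's nslookup output (line 5, third space-separated field); the test then pings that address from inside each pod to prove pods can reach the host. A sketch, where POD is a placeholder for one of the busybox pod names:

    HOST_IP=$(minikube kubectl -p multinode-demo -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    minikube kubectl -p multinode-demo -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"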

                                                
                                    
TestMultiNode/serial/AddNode (54.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-723048 -v=5 --alsologtostderr
E1216 05:15:07.380367  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-723048 -v=5 --alsologtostderr: (53.834726589s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.51s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-723048 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp testdata/cp-test.txt multinode-723048:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp multinode-723048:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile784218662/001/cp-test_multinode-723048.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp multinode-723048:/home/docker/cp-test.txt multinode-723048-m02:/home/docker/cp-test_multinode-723048_multinode-723048-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m02 "sudo cat /home/docker/cp-test_multinode-723048_multinode-723048-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp multinode-723048:/home/docker/cp-test.txt multinode-723048-m03:/home/docker/cp-test_multinode-723048_multinode-723048-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m03 "sudo cat /home/docker/cp-test_multinode-723048_multinode-723048-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp testdata/cp-test.txt multinode-723048-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp multinode-723048-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile784218662/001/cp-test_multinode-723048-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp multinode-723048-m02:/home/docker/cp-test.txt multinode-723048:/home/docker/cp-test_multinode-723048-m02_multinode-723048.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048 "sudo cat /home/docker/cp-test_multinode-723048-m02_multinode-723048.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp multinode-723048-m02:/home/docker/cp-test.txt multinode-723048-m03:/home/docker/cp-test_multinode-723048-m02_multinode-723048-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m03 "sudo cat /home/docker/cp-test_multinode-723048-m02_multinode-723048-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp testdata/cp-test.txt multinode-723048-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp multinode-723048-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile784218662/001/cp-test_multinode-723048-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp multinode-723048-m03:/home/docker/cp-test.txt multinode-723048:/home/docker/cp-test_multinode-723048-m03_multinode-723048.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048 "sudo cat /home/docker/cp-test_multinode-723048-m03_multinode-723048.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 cp multinode-723048-m03:/home/docker/cp-test.txt multinode-723048-m02:/home/docker/cp-test_multinode-723048-m03_multinode-723048-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m03 "sudo cat /home/docker/cp-test.txt"
E1216 05:15:24.308780  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 ssh -n multinode-723048-m02 "sudo cat /home/docker/cp-test_multinode-723048-m03_multinode-723048-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.35s)
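Note: CopyFile runs minikube cp in every direction of the copy matrix and verifies each transfer by cat-ing the file over SSH on the target node (-n selects the node within the profile). The three shapes it exercises, with placeholder names:

    minikube -p multinode-demo cp cp-test.txt multinode-demo:/home/docker/cp-test.txt        # local -> node
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> local
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
      multinode-demo-m02:/home/docker/cp-test.txt                                            # node -> node
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt" # verify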

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-723048 node stop m03: (1.326202301s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-723048 status: exit status 7 (516.180821ms)

                                                
                                                
-- stdout --
	multinode-723048
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-723048-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-723048-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-723048 status --alsologtostderr: exit status 7 (536.035751ms)

                                                
                                                
-- stdout --
	multinode-723048
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-723048-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-723048-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:15:26.568997  579685 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:15:26.569276  579685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:15:26.569292  579685 out.go:374] Setting ErrFile to fd 2...
	I1216 05:15:26.569298  579685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:15:26.569587  579685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:15:26.569815  579685 out.go:368] Setting JSON to false
	I1216 05:15:26.569858  579685 mustload.go:66] Loading cluster: multinode-723048
	I1216 05:15:26.569964  579685 notify.go:221] Checking for updates...
	I1216 05:15:26.570295  579685 config.go:182] Loaded profile config "multinode-723048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:15:26.570317  579685 status.go:174] checking status of multinode-723048 ...
	I1216 05:15:26.571199  579685 cli_runner.go:164] Run: docker container inspect multinode-723048 --format={{.State.Status}}
	I1216 05:15:26.591799  579685 status.go:371] multinode-723048 host status = "Running" (err=<nil>)
	I1216 05:15:26.591830  579685 host.go:66] Checking if "multinode-723048" exists ...
	I1216 05:15:26.592191  579685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-723048
	I1216 05:15:26.615705  579685 host.go:66] Checking if "multinode-723048" exists ...
	I1216 05:15:26.616019  579685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:15:26.616074  579685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-723048
	I1216 05:15:26.634390  579685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/multinode-723048/id_rsa Username:docker}
	I1216 05:15:26.731027  579685 ssh_runner.go:195] Run: systemctl --version
	I1216 05:15:26.737802  579685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:15:26.750982  579685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:15:26.812795  579685 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-16 05:15:26.802382626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1216 05:15:26.813508  579685 kubeconfig.go:125] found "multinode-723048" server: "https://192.168.67.2:8443"
	I1216 05:15:26.813569  579685 api_server.go:166] Checking apiserver status ...
	I1216 05:15:26.813617  579685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:15:26.825271  579685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup
	I1216 05:15:26.833712  579685 api_server.go:182] apiserver freezer: "8:freezer:/docker/119de994f998fc506de2d97cf9279213e085fd103e6b1d091b58d790d009b88c/crio/crio-368fd5464c247ffb2081fce241c1ff08c8d5ddd9c97a86e1f5233f87af0572ee"
	I1216 05:15:26.833784  579685 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/119de994f998fc506de2d97cf9279213e085fd103e6b1d091b58d790d009b88c/crio/crio-368fd5464c247ffb2081fce241c1ff08c8d5ddd9c97a86e1f5233f87af0572ee/freezer.state
	I1216 05:15:26.841838  579685 api_server.go:204] freezer state: "THAWED"
	I1216 05:15:26.841865  579685 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1216 05:15:26.850013  579685 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1216 05:15:26.850046  579685 status.go:463] multinode-723048 apiserver status = Running (err=<nil>)
	I1216 05:15:26.850075  579685 status.go:176] multinode-723048 status: &{Name:multinode-723048 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:15:26.850100  579685 status.go:174] checking status of multinode-723048-m02 ...
	I1216 05:15:26.850449  579685 cli_runner.go:164] Run: docker container inspect multinode-723048-m02 --format={{.State.Status}}
	I1216 05:15:26.870269  579685 status.go:371] multinode-723048-m02 host status = "Running" (err=<nil>)
	I1216 05:15:26.870293  579685 host.go:66] Checking if "multinode-723048-m02" exists ...
	I1216 05:15:26.870605  579685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-723048-m02
	I1216 05:15:26.894190  579685 host.go:66] Checking if "multinode-723048-m02" exists ...
	I1216 05:15:26.894504  579685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:15:26.894547  579685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-723048-m02
	I1216 05:15:26.911652  579685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/22158-438353/.minikube/machines/multinode-723048-m02/id_rsa Username:docker}
	I1216 05:15:27.009412  579685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:15:27.022663  579685 status.go:176] multinode-723048-m02 status: &{Name:multinode-723048-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:15:27.022697  579685 status.go:174] checking status of multinode-723048-m03 ...
	I1216 05:15:27.023043  579685 cli_runner.go:164] Run: docker container inspect multinode-723048-m03 --format={{.State.Status}}
	I1216 05:15:27.040936  579685 status.go:371] multinode-723048-m03 host status = "Stopped" (err=<nil>)
	I1216 05:15:27.040959  579685 status.go:384] host is not running, skipping remaining checks
	I1216 05:15:27.040966  579685 status.go:176] multinode-723048-m03 status: &{Name:multinode-723048-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
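Note: the non-zero exits above are expected. As seen here, minikube status reports per-node state and exits with status 7 while a node in the profile is stopped, so the test uses the exit code itself as the assertion. Sketch with a placeholder profile name:

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status    # prints m03 as Stopped and exits 7
    minikube -p multinode-demo node start m03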

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-723048 node start m03 -v=5 --alsologtostderr: (7.219314689s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.98s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (74.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-723048
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-723048
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-723048: (25.238329672s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-723048 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-723048 --wait=true -v=5 --alsologtostderr: (49.110571532s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-723048
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.47s)
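Note: RestartKeepsNodes is the guarantee that a full profile stop followed by a plain start brings back every node, workers included, without re-adding them. The shape of the check, with a placeholder profile name:

    minikube node list -p multinode-demo    # record the node set
    minikube stop -p multinode-demo         # stops every node in the profile
    minikube start -p multinode-demo --wait=true
    minikube node list -p multinode-demo    # must match the set recorded before the stop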

                                                
                                    
TestMultiNode/serial/DeleteNode (5.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-723048 node delete m03: (4.928269526s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.57s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-723048 stop: (23.88587936s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-723048 status: exit status 7 (94.615109ms)

                                                
                                                
-- stdout --
	multinode-723048
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-723048-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-723048 status --alsologtostderr: exit status 7 (103.995007ms)

                                                
                                                
-- stdout --
	multinode-723048
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-723048-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:17:19.106759  587540 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:17:19.106877  587540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:19.106887  587540 out.go:374] Setting ErrFile to fd 2...
	I1216 05:17:19.106893  587540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:19.107164  587540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:17:19.107337  587540 out.go:368] Setting JSON to false
	I1216 05:17:19.107379  587540 mustload.go:66] Loading cluster: multinode-723048
	I1216 05:17:19.107450  587540 notify.go:221] Checking for updates...
	I1216 05:17:19.108763  587540 config.go:182] Loaded profile config "multinode-723048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:17:19.108796  587540 status.go:174] checking status of multinode-723048 ...
	I1216 05:17:19.109538  587540 cli_runner.go:164] Run: docker container inspect multinode-723048 --format={{.State.Status}}
	I1216 05:17:19.129471  587540 status.go:371] multinode-723048 host status = "Stopped" (err=<nil>)
	I1216 05:17:19.129494  587540 status.go:384] host is not running, skipping remaining checks
	I1216 05:17:19.129501  587540 status.go:176] multinode-723048 status: &{Name:multinode-723048 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:17:19.129538  587540 status.go:174] checking status of multinode-723048-m02 ...
	I1216 05:17:19.129842  587540 cli_runner.go:164] Run: docker container inspect multinode-723048-m02 --format={{.State.Status}}
	I1216 05:17:19.158642  587540 status.go:371] multinode-723048-m02 host status = "Stopped" (err=<nil>)
	I1216 05:17:19.158667  587540 status.go:384] host is not running, skipping remaining checks
	I1216 05:17:19.158674  587540 status.go:176] multinode-723048-m02 status: &{Name:multinode-723048-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-723048 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-723048 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (49.365388624s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-723048 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.05s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-723048
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-723048-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-723048-m02 --driver=docker  --container-runtime=crio: exit status 14 (91.14846ms)

                                                
                                                
-- stdout --
	* [multinode-723048-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-723048-m02' is duplicated with machine name 'multinode-723048-m02' in profile 'multinode-723048'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-723048-m03 --driver=docker  --container-runtime=crio
E1216 05:18:22.214422  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:18:22.646954  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-723048-m03 --driver=docker  --container-runtime=crio: (33.282048809s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-723048
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-723048: exit status 80 (332.888313ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-723048 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-723048-m03 already exists in multinode-723048-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-723048-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-723048-m03: (2.124855995s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.88s)
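Note: ValidateNameConflict pins down two guard rails around profile naming. Starting a new profile whose name collides with an existing profile's machine name (here multinode-723048-m02) fails fast with MK_USAGE (exit 14), and node add fails with GUEST_NODE_ADD (exit 80) when the next generated node name is already taken by a standalone profile. Sketch, with a placeholder base name demo:

    minikube start -p demo-m02 --driver=docker --container-runtime=crio   # exit 14 if profile 'demo' already owns machine demo-m02
    minikube start -p demo-m03 --driver=docker --container-runtime=crio   # standalone profile squats the next node name
    minikube node add -p demo                                             # exit 80: demo-m03 already exists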

                                                
                                    
TestPreload (120.89s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-106302 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-106302 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m0.677460357s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-106302 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-106302 image pull gcr.io/k8s-minikube/busybox: (2.213962431s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-106302
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-106302: (5.912916637s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-106302 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1216 05:20:24.307817  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-106302 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.403374587s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-106302 image list
helpers_test.go:176: Cleaning up "test-preload-106302" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-106302
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-106302: (2.448987009s)
--- PASS: TestPreload (120.89s)
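Note: TestPreload checks that images pulled into a cluster created without the preloaded image tarball survive a stop and a preload-enabled restart, i.e. the restart must not clobber the existing image store. The workflow, with a placeholder profile name:

    minikube start -p preload-demo --preload=false --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true
    minikube -p preload-demo image list    # busybox must still be listed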

                                                
                                    
TestScheduledStopUnix (111.55s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-232318 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-232318 --memory=3072 --driver=docker  --container-runtime=crio: (34.708697675s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-232318 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 05:21:25.136224  601621 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:21:25.136400  601621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:25.136438  601621 out.go:374] Setting ErrFile to fd 2...
	I1216 05:21:25.136460  601621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:25.136874  601621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:21:25.137296  601621 out.go:368] Setting JSON to false
	I1216 05:21:25.137480  601621 mustload.go:66] Loading cluster: scheduled-stop-232318
	I1216 05:21:25.138280  601621 config.go:182] Loaded profile config "scheduled-stop-232318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:21:25.138926  601621 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/config.json ...
	I1216 05:21:25.139201  601621 mustload.go:66] Loading cluster: scheduled-stop-232318
	I1216 05:21:25.139389  601621 config.go:182] Loaded profile config "scheduled-stop-232318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-232318 -n scheduled-stop-232318
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 05:21:25.598896  601710 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:21:25.599026  601710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:25.599037  601710 out.go:374] Setting ErrFile to fd 2...
	I1216 05:21:25.599042  601710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:25.599323  601710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:21:25.599589  601710 out.go:368] Setting JSON to false
	I1216 05:21:25.600435  601710 daemonize_unix.go:73] killing process 601637 as it is an old scheduled stop
	I1216 05:21:25.604174  601710 mustload.go:66] Loading cluster: scheduled-stop-232318
	I1216 05:21:25.604647  601710 config.go:182] Loaded profile config "scheduled-stop-232318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:21:25.604744  601710 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/config.json ...
	I1216 05:21:25.604926  601710 mustload.go:66] Loading cluster: scheduled-stop-232318
	I1216 05:21:25.605048  601710 config.go:182] Loaded profile config "scheduled-stop-232318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1216 05:21:25.610993  441727 retry.go:31] will retry after 144.795µs: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.612162  441727 retry.go:31] will retry after 90.331µs: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.613292  441727 retry.go:31] will retry after 287.897µs: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.614434  441727 retry.go:31] will retry after 484.952µs: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.615594  441727 retry.go:31] will retry after 410.207µs: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.616739  441727 retry.go:31] will retry after 518.491µs: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.617827  441727 retry.go:31] will retry after 1.278292ms: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.620061  441727 retry.go:31] will retry after 2.194257ms: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.623256  441727 retry.go:31] will retry after 3.603887ms: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.627504  441727 retry.go:31] will retry after 5.232472ms: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.633702  441727 retry.go:31] will retry after 5.216374ms: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.639540  441727 retry.go:31] will retry after 12.268988ms: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.653656  441727 retry.go:31] will retry after 11.154461ms: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.665884  441727 retry.go:31] will retry after 18.189984ms: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.684472  441727 retry.go:31] will retry after 20.619428ms: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
I1216 05:21:25.705710  441727 retry.go:31] will retry after 65.388225ms: open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-232318 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-232318 -n scheduled-stop-232318
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-232318
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-232318 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 05:21:51.564481  602075 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:21:51.564662  602075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:51.564673  602075 out.go:374] Setting ErrFile to fd 2...
	I1216 05:21:51.564679  602075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:51.565339  602075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-438353/.minikube/bin
	I1216 05:21:51.565901  602075 out.go:368] Setting JSON to false
	I1216 05:21:51.566043  602075 mustload.go:66] Loading cluster: scheduled-stop-232318
	I1216 05:21:51.566445  602075 config.go:182] Loaded profile config "scheduled-stop-232318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:21:51.566576  602075 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/scheduled-stop-232318/config.json ...
	I1216 05:21:51.566834  602075 mustload.go:66] Loading cluster: scheduled-stop-232318
	I1216 05:21:51.566994  602075 config.go:182] Loaded profile config "scheduled-stop-232318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-232318
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-232318: exit status 7 (71.420816ms)

                                                
                                                
-- stdout --
	scheduled-stop-232318
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-232318 -n scheduled-stop-232318
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-232318 -n scheduled-stop-232318: exit status 7 (72.089229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-232318" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-232318
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-232318: (5.201190166s)
--- PASS: TestScheduledStopUnix (111.55s)
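Note: scheduled stops run as a detached background process, which is why each stop --schedule invocation returns immediately and why the second run above logs "killing process ... as it is an old scheduled stop". The knobs exercised by this test, with a placeholder profile name:

    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube stop -p sched-demo --schedule 15s       # re-arming replaces the earlier timer
    minikube stop -p sched-demo --cancel-scheduled   # clears any pending stop
    minikube status -p sched-demo --format '{{.TimeToStop}}'   # inspect the remaining time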

                                                
                                    
TestInsufficientStorage (13.02s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-725109 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-725109 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.44577095s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"990453ca-14ec-4cfa-89e1-7dceb0a73861","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-725109] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"feb92258-2883-4a53-ac61-0b394a46d677","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22158"}}
	{"specversion":"1.0","id":"19747d10-a5c0-4b36-9e0d-796b6fa6f6f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"446b9a70-048f-4dc5-b511-0adc3203eb65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig"}}
	{"specversion":"1.0","id":"038aac1c-fde3-4237-8a26-7fc8bae3b6c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube"}}
	{"specversion":"1.0","id":"92e7f576-28e5-4c5e-8937-3d127e0cd803","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b4fb9a5b-62b9-4e04-8e6e-4ac2169c2236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"951d0186-c1ab-4c93-8dd7-d1f1af60f3d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b0fb8a68-690a-45f1-982a-d98537cf0cb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"840744f4-c590-467c-9639-caf9fb2a461c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"efcf2887-3343-4576-9b70-b252127d600b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7a9cce8b-329d-4737-bc82-d931dc436bc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-725109\" primary control-plane node in \"insufficient-storage-725109\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"052f5582-8838-472b-8f21-27e7cbf50acd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765575274-22117 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0df80f35-955d-4ac6-8583-a9017af3ffc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c0158a4-ce90-4b2b-92c9-8dd6fa1b350e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-725109 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-725109 --output=json --layout=cluster: exit status 7 (294.869632ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-725109","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-725109","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 05:22:52.642720  603783 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-725109" does not appear in /home/jenkins/minikube-integration/22158-438353/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-725109 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-725109 --output=json --layout=cluster: exit status 7 (304.352114ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-725109","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-725109","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 05:22:52.945898  603846 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-725109" does not appear in /home/jenkins/minikube-integration/22158-438353/kubeconfig
	E1216 05:22:52.955835  603846 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/insufficient-storage-725109/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-725109" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-725109
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-725109: (1.973257375s)
--- PASS: TestInsufficientStorage (13.02s)
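Note: with --output=json, start emits CloudEvents; the final io.k8s.sigs.minikube.error event above carries exitcode 26 (RSRC_DOCKER_STORAGE), and status --layout=cluster then reports StatusCode 507 (InsufficientStorage) for the half-created node. The test provokes this without actually filling the disk via the test-only MINIKUBE_TEST_* variables visible in the event stream. Sketch, with a placeholder profile name:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --output=json --driver=docker --container-runtime=crio
    # exits 26; the error message notes that --force skips the free-space check
    minikube status -p storage-demo --output=json --layout=cluster   # StatusCode 507, exit 7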

                                                
                                    
TestRunningBinaryUpgrade (299.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2693563620 start -p running-upgrade-383657 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2693563620 start -p running-upgrade-383657 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.621493203s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-383657 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 05:28:22.214088  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:28:22.647172  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:30:24.308508  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:31:47.381886  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-861171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-383657 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.07938883s)
helpers_test.go:176: Cleaning up "running-upgrade-383657" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-383657
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-383657: (2.044131885s)
--- PASS: TestRunningBinaryUpgrade (299.97s)
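
Note: the sequence above is the core of the running-binary upgrade check: create a cluster with a released binary, then point the binary under test at the same live profile. A compact Go sketch of that flow follows; the binary paths are placeholders for the versioned temp file and build output used in this run.

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes one minikube invocation and aborts on failure.
    func run(bin string, args ...string) {
        if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
            log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
        }
    }

    func main() {
        const profile = "running-upgrade-383657"
        oldBin := "/tmp/minikube-v1.35.0"    // released binary (placeholder path)
        newBin := "out/minikube-linux-arm64" // binary under test

        // Create the cluster with the old release and leave it running.
        run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=crio")
        // The new binary must adopt and upgrade the live cluster in place.
        run(newBin, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=crio")
        run(newBin, "delete", "-p", profile)
    }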

TestMissingContainerUpgrade (119.98s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2408755967 start -p missing-upgrade-508979 --memory=3072 --driver=docker  --container-runtime=crio
E1216 05:24:45.289957  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2408755967 start -p missing-upgrade-508979 --memory=3072 --driver=docker  --container-runtime=crio: (1m11.417309331s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-508979
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-508979
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-508979 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-508979 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.211076135s)
helpers_test.go:176: Cleaning up "missing-upgrade-508979" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-508979
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-508979: (2.195912042s)
--- PASS: TestMissingContainerUpgrade (119.98s)
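
Note: this variant differs from the upgrade above in that the cluster's docker container is deleted out-of-band before the new binary starts. A short sketch under the same placeholder-path assumptions; with the docker driver the container is named after the profile, as the docker stop/rm lines above show.

    package main

    import "os/exec"

    func main() {
        const profile = "missing-upgrade-508979"
        // Remove the cluster's container out from under the profile
        // (errors ignored in this sketch)...
        exec.Command("docker", "stop", profile).Run()
        exec.Command("docker", "rm", profile).Run()
        // ...then a start with the new binary must detect the loss and
        // recreate the container from the profile's on-disk config.
        exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
            "--memory=3072", "--driver=docker", "--container-runtime=crio").Run()
    }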

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-868033 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-868033 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (121.359754ms)

-- stdout --
	* [NoKubernetes-868033] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-438353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-438353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestPause/serial/Start (88.63s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-879168 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-879168 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m28.629406712s)
--- PASS: TestPause/serial/Start (88.63s)

TestNoKubernetes/serial/StartWithK8s (45.92s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-868033 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1216 05:23:22.213903  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/addons-266389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:23:22.646823  441727 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-438353/.minikube/profiles/functional-763073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-868033 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.460746413s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-868033 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.92s)

TestNoKubernetes/serial/StartWithStopK8s (27.84s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-868033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-868033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.46727022s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-868033 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-868033 status -o json: exit status 2 (327.989951ms)

-- stdout --
	{"Name":"NoKubernetes-868033","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-868033
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-868033: (2.042727157s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.84s)
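
Note: the flat "status -o json" shape above (as opposed to the --layout=cluster shape earlier) is what confirms a --no-kubernetes profile: host up, kubelet and apiserver down. A minimal Go sketch, assuming this run's profile name and only the fields printed above.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Status mirrors the flat "status -o json" output printed above.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        // Exit status 2 flags the stopped components, so the error is
        // ignored and the JSON itself is checked.
        out, _ := exec.Command("out/minikube-linux-arm64", "-p", "NoKubernetes-868033",
            "status", "-o", "json").Output()
        var s Status
        if err := json.Unmarshal(out, &s); err != nil {
            panic(err)
        }
        ok := s.Host == "Running" && s.Kubelet == "Stopped" && s.APIServer == "Stopped"
        fmt.Println("no-kubernetes state as expected:", ok)
    }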

TestNoKubernetes/serial/Start (8.92s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-868033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-868033 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.918862892s)
--- PASS: TestNoKubernetes/serial/Start (8.92s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22158-438353/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-868033 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-868033 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.505578ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
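
Note: "systemctl is-active --quiet" prints nothing and reports state purely via exit code; on systemd a non-zero code (3 here, surfaced by ssh as "Process exited with status 3") means the unit is not active, which is the pass condition for a no-kubernetes profile. A small Go sketch of the same check, reusing the exact in-VM command from the log above.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run the same probe the test runs inside the node over ssh.
        err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-868033",
            "sudo systemctl is-active --quiet service kubelet").Run()
        // A non-nil err is the expected outcome here: kubelet is inactive.
        fmt.Println("kubelet active:", err == nil)
    }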

TestNoKubernetes/serial/ProfileList (1.1s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

TestNoKubernetes/serial/Stop (1.35s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-868033
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-868033: (1.350367687s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

TestNoKubernetes/serial/StartNoArgs (6.93s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-868033 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-868033 --driver=docker  --container-runtime=crio: (6.92697763s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.93s)

TestPause/serial/SecondStartNoReconfiguration (31.65s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-879168 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-879168 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.611827s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.65s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-868033 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-868033 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.344279ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (1.81s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.81s)

TestStoppedBinaryUpgrade/Upgrade (63.45s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.226303699 start -p stopped-upgrade-221012 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.226303699 start -p stopped-upgrade-221012 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.882526605s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.226303699 -p stopped-upgrade-221012 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.226303699 -p stopped-upgrade-221012 stop: (1.256567751s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-221012 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-221012 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.314275651s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (63.45s)
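
Note: this is the third upgrade path in this run: create with the old release, stop the cluster cleanly, then start it with the new binary, which must upgrade the stopped cluster on restart. A sketch under the same placeholder-path assumptions as the earlier flows; errors are ignored for brevity.

    package main

    import "os/exec"

    func main() {
        const profile = "stopped-upgrade-221012"
        oldBin := "/tmp/minikube-v1.35.0"    // released binary (placeholder path)
        newBin := "out/minikube-linux-arm64" // binary under test

        // Create with the old release, then stop it cleanly...
        exec.Command(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=crio").Run()
        exec.Command(oldBin, "-p", profile, "stop").Run()
        // ...so the new binary has to upgrade the stopped cluster on start.
        exec.Command(newBin, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=crio").Run()
    }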

TestStoppedBinaryUpgrade/MinikubeLogs (1.56s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-221012
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-221012: (1.558015959s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.56s)


Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.54
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
150 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-461022 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-461022" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-461022
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
